Understand gestures on a touch screen


We’ve come a long way, and there’s still a bit left to complete our app. At the moment it feels a bit rigid, and we haven’t implemented some of the intended functionality.

A key benefit of today's smart mobile devices is their ability to interpret gestures! We’ve already dealt with gestures while working with buttons: when we tap a button, we perform a gesture, and the button element handles it. As usual, there's always more!

Let’s explore what gestures we have available, pick those that will serve us best, and discover how we can handle them!

Exploring Gestures

Gestures are the way people interact with a touchscreen. In many cases, this is the only way they interact with the primary functionality of a device.

These are the key gestures used in iOS:

Key gestures in iOS

Some gestures, like tap, can be single- or multi-finger, while pinch and rotation must involve at least two fingers.

This sounds cool!

How do we get our hands on all of it?

To work with gestures we need to use gesture recognizers!

Gesture Recognizers

In general, gesture handling involves two components - the gesture recognizer and the touch. They are represented by the respective classes: UIGestureRecognizer  and  UITouch . 

  • UIGestureRecognizer and its subclasses are sufficient to work with gestures as a whole - taps, swipes, etc.

  • UITouch is used when finer-grained functionality is needed - like tracking a touch as it moves, for example.
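
To illustrate the difference, here's a minimal sketch of working with raw UITouch objects by overriding the UIResponder touch methods (the DrawingView class here is hypothetical, not part of our app):

import UIKit

class DrawingView: UIView {
    // Called when a finger first touches the view.
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        print("Touch began at \(touch.location(in: self))")
    }

    // Called repeatedly as the finger moves - fine-grained tracking.
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        print("Touch moved to \(touch.location(in: self))")
    }
}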

As we explored earlier in this chapter, there is an array of gestures available to us in iOS. For convenience, each variation is represented by a subclass of the base gesture recognizer.

Here are the subclasses for the key gestures:

  • UITapGestureRecognizer: This class handles tap gestures made on a view. It can be used to handle single or multiple taps, with one or more fingers. Tapping is one of the most common gestures users make.

  • UISwipeGestureRecognizer: Another important gesture is the swipe, and this class exists just for it. Swiping happens when dragging a finger in a direction (right, left, up, or down). A characteristic example of the swipe gesture is in the Photos app, where we use a finger to slide from one photo to another (see the snippet after this list).

  • UIPanGestureRecognizer: The pan gesture is actually a drag gesture. It’s used to drag views from one point to another.

  • UIPinchGestureRecognizer: When you view photos in the Photos app and use two fingers to zoom in or out of a photo, you perform a pinch gesture; pinching always requires two fingers. An object of this class is usually handy for changing the transform of a view, specifically its scale. Using pinch gestures, for example, you can implement zooming in and out of photos in your own apps.

  • UIRotationGestureRecognizer: Like the pinch gesture, rotation uses two fingers, in this case to rotate a view.

  • UILongPressGestureRecognizer: An object of this class monitors for long-press gestures happening on a view. The press must last long enough to be detected, and the finger or fingers must not move far from the pressed point, otherwise the gesture fails.

  • UIScreenEdgePanGestureRecognizer: This one is similar to the swipe gesture, but with one key difference: the finger movement must always begin from an edge of the screen.
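
For instance, here's a small hypothetical snippet (someView and handleSwipe(_:) are illustrative names, not part of our app) showing how a swipe recognizer is configured with a direction before being attached to a view:

let swipeGestureRecognizer = UISwipeGestureRecognizer(target: self, action: #selector(handleSwipe(_:)))
// React only to right-to-left swipes; the default direction is .right.
swipeGestureRecognizer.direction = .left
someView.addGestureRecognizer(swipeGestureRecognizer)

@objc func handleSwipe(_ sender: UISwipeGestureRecognizer) {
    print("swiped left")
}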

Utilizing Gestures

There are two ways to incorporate gestures into a project:

  • Using Interface Builder - creating gesture recognizers and associating them with the code by dragging connections, similar to the way we create outlets and actions.

  • Programmatically - by creating objects and associations exclusively in the code.

Which approach is better? 

Both approaches are suitable. In practice, in larger projects (and consequently bigger development teams), the programmatic approach is more common.

Implementation components

Gestures involve four main components:

  • Gesture object - tap, swipe, etc.

  • Associated element for the gesture - tapping on an image view, for example.

  • Target - the object that will handle the gesture; it gets notified when the gesture has been recognized.

  • Action - the method to perform.

When creating gestures via Interface Builder, the only thing we need to define explicitly is the action. The rest of the components are derived from the connection: the associated element is the UI element we drag from, the target is the code file we drag to, and the gesture type is what we configure as we drop the recognizer, before creating the action.

Programmatic implementation, on the other hand, requires specifying all those aspects in the code.

Managing a gesture's progress

There are two components available for observing or managing gestures:

  • State - defined as a property reflecting the lifecycle of a gesture: when it starts, finishes, or gets canceled (abandoned).

  • Touch object(s) - the particulars of a gesture: force, position, speed, direction, etc.

Here are the available states for gestures:

  • Continuous & discrete gestures:

    • possible

    • failed

    • recognized

    • canceled 

  • Continuous gestures (e.g., pan) transition from possible through:

    • began

    • changed

    • ended
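
As a sketch of how these states are used (assuming a pan gesture recognizer wired to a hypothetical handlePan(_:) handler), we can branch on the state property inside the action method:

@objc func handlePan(_ sender: UIPanGestureRecognizer) {
    switch sender.state {
    case .began:
        print("pan began")
    case .changed:
        // The particulars of the gesture are exposed through the recognizer,
        // e.g. the accumulated translation for a pan.
        print("pan changed: \(sender.translation(in: sender.view))")
    case .ended, .cancelled, .failed:
        print("pan finished")
    default:
        break
    }
}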

Gestures are a very broad and powerful segment of iOS development. In your career, you'll come across a long array of variations of how gestures can be used.

Now, let's get practical!

FrameIT + Gestures

We'll start with the key gesture we need to implement for our app - a tap on the image view to present the image-picking options: the alert controller we initially placed in the share method to test out its functionality.

Implementing Tap gesture

As we established earlier, there are two ways to implement gesture functionality - using Interface Builder and in code. We are going to do both! :p

Using Interface Builder

We are going to use Interface Builder to create a tap gesture associated with our creation image. For that, we need to accomplish two things:

  • Add a tap gesture recognizer to the image view.

  • Assign an action to the tap gesture recognizer.

To add a tap gesture recognizer to the image, find the Tap Gesture Recognizer in the Object library and drag it onto the creation image view in the storyboard:

Adding gesture recognizer to a storyboard view

Notice that after adding this new element, it appears in the component tree of our screen, where we can configure it using the Attributes Inspector:

Gesture recognizer in storyboard

In the tree of view controller elements, the gesture recognizer does not appear next to the image view.

How do they know of each other?

Right-click the gesture recognizer and the image view to observe their mutual connection:

View - Gesture recognizer connection

And finally, let's assign an action to the gesture recognizer so that something happens when the user taps the image view. You already know how to create an action using Interface Builder - we did it for buttons. This time, select the tap gesture recognizer and Ctrl + Drag it to the view controller until you see the blue guidelines appear:

Creating tap action in interface builder

Then configure the action in a popup:

Configuring tap gesture action

And what's the implementation? o_O 

We've got the implementation already in our displayImagePickingOptions method - we just need to call it from a new action instead of the share method:

@IBAction func changeImage(_ sender: UITapGestureRecognizer) {
    displayImagePickingOptions()
}

And let's remember to remove the call from its test location, the share method:

@IBAction func share(_ sender: Any) {
    if let index = colorSwatches.index(where: {$0.caption == creation.colorSwatch.caption}) {
        savedColorSwatchIndex = index
    }
}

Let's test! :zorro: Run the app and click on the image placeholder - you should get the image-picking options action sheet!

Now, click on the 'Share' button - it shouldn't produce any visible effect!

Great, and now, let's accomplish the exact same thing in code.

Implementing gesture recognizers programmatically

We need to describe the gesture and attach it to the target action - the method we already have: changeImage().

We want our gesture available as soon as the app is launched, so it has to be part of our initial configuration. This code goes in the configure method:

// create tap gesture recognizer
let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(changeImage(_:)))
creationImageView.addGestureRecognizer(tapGestureRecognizer)

Let's review the above:

  • We've declared a constant of UITapGestureRecognizer type.

  • Set the gesture recognizer's target to self - meaning our view controller gets notified when a tap is recognized.

  • Specified the action - the changeImage method.

  • And, finally, added the gesture recognizer to the collection of gesture recognizers associated with the creation image view.

Now test it! You should observe the same results as before...

Code clean-up

Let's sweep the floor. >_< We've got the tap gesture recognizer defined both in Interface Builder and in code. Pick one, remove the other!

Which one to choose?

Up to you. Moving forward, we'll be implementing gesture recognizers in code. However, it won't affect our current implementation if you choose to keep the IB version.

To ensure we've kept everything intact, run the app again and make sure it still functions as expected!

Implementing transforming gestures

The most fun part of our app is transforming the image. According to the requirements, we need to provide the following interactions:

  • Move

  • Rotate

  • Zoom in/out

And we've got all the gesture recognizers needed to support these:

  • UIPanGestureRecognizer - to translate the movement.

  • UIRotationGestureRecognizer - to manage rotation.

  • UIPinchGestureRecognizer - to process zooming in and out - scaling.

Let's create the needed gesture recognizers following our tap gesture recognizer creation:

let panGestureRecognizer = UIPanGestureRecognizer(target: self, action: #selector(moveImageView(_:)))
creationImageView.addGestureRecognizer(panGestureRecognizer)
        
let rotationGestureRecognizer = UIRotationGestureRecognizer(target: self, action: #selector(rotateImageView(_:)))
creationImageView.addGestureRecognizer(rotationGestureRecognizer)
        
let pinchGestureRecognizer = UIPinchGestureRecognizer(target: self, action: #selector(scaleImageView(_:)))
creationImageView.addGestureRecognizer(pinchGestureRecognizer)

And corresponding action methods:

@objc func moveImageView(_ sender: UIPanGestureRecognizer) {
    print("moving")
}
    
@objc func rotateImageView(_ sender: UIRotationGestureRecognizer) {
    print("rotating")
}
    
@objc func scaleImageView(_ sender: UIPinchGestureRecognizer) {
    print("scaling")
}

If you test it now and attempt each of the described gestures, you should get the respective output in the console - one action per gesture attempt.
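
As a preview of where these handlers are headed (a sketch under our assumptions, not the final implementation), a pan handler typically applies the recognizer's translation to the view and then resets it:

@objc func moveImageView(_ sender: UIPanGestureRecognizer) {
    guard let view = sender.view else { return }
    // Move the view by the accumulated translation...
    let translation = sender.translation(in: view.superview)
    view.center = CGPoint(x: view.center.x + translation.x, y: view.center.y + translation.y)
    // ...then reset it so the next callback reports only the new movement.
    sender.setTranslation(.zero, in: view.superview)
}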

Here's the trick - we need them to function simultaneously (or at least zooming and rotating would be expected to go together)! When a user is zooming with two fingers, they may also want to rotate without letting go, and perhaps even move at the same time.

This is rather advanced functionality: we need to implement a delegate method that defines, for the views on our screen, whether simultaneous gestures are permitted:

func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                       shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
    // conditions for simultaneous gestures
}

The function returns true or false, which defines whether the associated gestures are recognized simultaneously or not.

Here are the conditions we need to describe in the code of this function:

  • simultaneous gesture recognition will only be supported for creationImageView

  • neither of the recognized gestures should be a tap gesture or a pan gesture (we'll separate the moving actions from scaling and rotating for the moment)

And, here's the implementation:

func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                       shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {

    // simultaneous gesture recognition will only be supported for creationImageView
    if gestureRecognizer.view != creationImageView {
        return false
    }

    // neither of the recognized gestures should be a tap or pan gesture
    if gestureRecognizer is UITapGestureRecognizer
        || otherGestureRecognizer is UITapGestureRecognizer
        || gestureRecognizer is UIPanGestureRecognizer
        || otherGestureRecognizer is UIPanGestureRecognizer {
        return false
    }

    return true
}

If you run the app now, you'll realize that our new improvement has no effect. :( This is because the gesture recognizers don't know which object to ask for permission to execute simultaneously. We need to assign a delegate to each of them, so let's add this to our configuration:

...
panGestureRecognizer.delegate = self
...
rotationGestureRecognizer.delegate = self
...
pinchGestureRecognizer.delegate = self

And since we are stating that our view controller can handle advanced functionality for gesture recognizers, we need to add UIGestureRecognizerDelegate to the view controller declaration:

class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate, UIGestureRecognizerDelegate {
...
}

Test it now! You should observe the scaling and rotating gestures being handled during a single interaction, so you'll get a mix of printouts in the console output:

rotating
scaling
...

Well done, my friend!

Let's Recap!

  • There are two classes for handling gestures in iOS: UITouch and UIGestureRecognizer.

  • UIGestureRecognizer and its subclasses make it easy to add classic gestures to an interface.

  • Utilizing a gesture recognizer comes down to associating a particular gesture with a UI element and connecting it to a handling target, which performs an action associated with the gesture or its particular state.

  • Gestures can be followed by monitoring their state property, which reflects a gesture's progress.

  • The particulars of a gesture during all of its states can be accessed via the associated touch object(s).
