LiveXAML and Xamarin.Forms

Full disclosure. This blog post is sponsored by LiveXAML. I was sent a free license to try it out and mention it in a blog post. All opinions are my own and I was not paid to say nice things about the product.
I do not usually work with Xamarin.Forms, but Mihhail from LiveXAML contacted me and asked whether I wanted a license for LiveXAML to test out and mention them in a blog post. I thought "why not", and here are some brief thoughts and experiences with the tool.

First of all, LiveXAML is not the only product on the market. There are other players, such as Continuous and Gorilla Player, and now Xamarin also has its Live Player. They all function a little differently from each other, but their goals are very much the same: to show changes in the UI while you are writing code.

I tried LiveXAML with the following setup:
  • Visual Studio 2017 15.2
  • Xamarin.Forms 2.3.4.247
  • LiveXAML 1.2.12
  • Android x86 7.1 image running in a Hyper-V VM
The project I worked with LiveXAML in was a sample project with a ListView showing some images and names of cheeses. All the code was contained in a PCL. To get LiveXAML working I simply installed the Visual Studio 2017 extension and installed the LiveXAML NuGet into the PCL project.
For solutions using a Shared project, you instead need to install the LiveXAML NuGet into the application projects that consume the Shared project.

That is basically it. Once the very small setup is done, you are ready to play around. Just start debugging your application, fire up a XAML page and start editing. Every time you press save, LiveXAML will pick up the changes and they will quickly be reflected in the UI of the running app.


See the video above to see it in action. In it I edit some simple XAML: I change the height and width of the images, then apply a transformation to add rounded borders. Every time I press Ctrl+S to save, the emulator updates the view, which is super useful when playing around with UI in XAML.

LiveXAML seems to work pretty well and appears stable. I only tried it in a Xamarin.Forms Android project, but the LiveXAML web page says that iOS projects work as well, and there is also some preliminary VS4Mac support for those of you who use that. It installed and ran fine in an emulator that is not mentioned as supported on the web page, which is great. It also worked with external assemblies, in this case FFImageLoading, where images and transformations were shown correctly. To me it seems that any valid XAML will work with LiveXAML and be rendered on the device!

The difference between LiveXAML and Xamarin Live Player is that with the former you install a small server in your app, which a plugin in Visual Studio sends commands to. When you save your XAML, the plugin sends a payload which is then rendered on the screen.
With Xamarin Live Player you don't add any dependencies to your app. Instead you install an app on the device you want to run on, and there is a pairing process with the IDE. Your app then appears to run inside the Xamarin Live Player app, and changes to any part of your code are reflected. This works with coded UI as well.
Like LiveXAML, Gorilla Player seems to be limited to XAML-based UI, and it appears to work in a similar fashion, where your project takes a dependency on a Gorilla Player NuGet and you need to set up Gorilla Player in your app startup.

All in all I was quite pleased with how well LiveXAML works and how easy it is to get started. It just works. I have not tried it on a real device, but given that the device is on the same network as your PC, I think it will work just fine. If I were making Xamarin.Forms applications professionally, I would seriously consider LiveXAML as a means to create and debug XAML layouts.

Upgrade notes for MvvmCross 5.x on iOS

This post is just a couple of notes about some of the changes that affected some of my apps when updating to MvvmCross 5.x on Xamarin.iOS.

IMvxModalIosView or MvxModalNavSupportIosViewPresenter could not be found

MvvmCross has replaced the presenter logic, which before looked for the IMvxModalIosView interface in order to figure out how to present a ViewController. You should instead use the MvxModalPresentation attribute for your ViewController.

Previously you also had to use MvxModalNavSupportIosViewPresenter for it to understand the IMvxModalIosView interface. This is now all baked into the default presenter.

You can also give the attribute a few hints about how the ViewController should display itself through the following properties:

  • Animated: set this to false to disable animations when presenting your modal ViewController; defaults to true
  • PreferredContentSize: set this to your content size on iPad, since modals are not shown full screen by default there
  • ModalTransitionStyle: set this to the desired transition style (cover vertical, flip, cross dissolve or partial curl); it has no effect when Animated is set to false; defaults to CoverVertical
  • ModalPresentationStyle: set this to the desired presentation style. This depends on the layout size and will in most cases default to FullScreen on iPhone, so it is mostly useful for changing the presentation on iPads
Here is how the changes will reflect in your code:
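A rough sketch of the before and after, assuming a ViewController called SettingsView with a SettingsViewModel (both made-up names for illustration); the namespaces may differ slightly depending on your exact MvvmCross 5.x version:

```csharp
using MvvmCross.iOS.Views;
using MvvmCross.iOS.Views.Presenters.Attributes;
using UIKit;

// Before (4.x): the presenter looked for the marker interface
// public class SettingsView : MvxViewController<SettingsViewModel>, IMvxModalIosView { }

// After (5.x): decorate the ViewController with the presentation attribute instead
[MvxModalPresentation(
    Animated = true,
    ModalTransitionStyle = UIModalTransitionStyle.CoverVertical,
    ModalPresentationStyle = UIModalPresentationStyle.FormSheet)]
public class SettingsView : MvxViewController<SettingsViewModel>
{
    // PreferredContentSize is also available for sizing the modal on iPad
}
```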


Changing to the new presenter for Modals is pretty painless.

CreateNavigationController has changed signature

MvxIosViewPresenter has changed the signature of CreateNavigationController; luckily this is a very simple change.
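If you override it in a custom presenter, the change looks roughly like the sketch below; the 5.x types shown here are what I recall, so check the exact signature in the MvvmCross version you have installed:

```csharp
// MvvmCross 4.x
protected override UINavigationController CreateNavigationController(UIViewController rootViewController)
{
    return new UINavigationController(rootViewController);
}

// MvvmCross 5.x - the method now works with MvxNavigationController
protected override MvxNavigationController CreateNavigationController(UIViewController viewController)
{
    return new MvxNavigationController(viewController);
}
```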

MvxTabBarViewController is presented as child?

Another thing in the new presenter for iOS is that some ViewControllers cannot be presented as children and need to be presented as root instead. This will be fixed in MvvmCross 5.0.4, since it is unwanted behavior; you should be in control of how your ViewControllers are presented. For now, in 5.0.0-5.0.3, you can make a small change to work around it.
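One possible workaround, sketched below, is to subclass MvxIosViewPresenter, let the tab bar controller through as a child yourself, and register the custom presenter in your Setup class. The member names (MasterNavigationController, CreatePresenter) and namespaces are from memory, so verify them against your MvvmCross version:

```csharp
// Namespaces per MvvmCross 5.x; adjust as needed
using MvvmCross.Core.ViewModels;
using MvvmCross.iOS.Views;
using MvvmCross.iOS.Views.Presenters;
using MvvmCross.iOS.Views.Presenters.Attributes;
using UIKit;

public class CustomIosViewPresenter : MvxIosViewPresenter
{
    public CustomIosViewPresenter(IUIApplicationDelegate applicationDelegate, UIWindow window)
        : base(applicationDelegate, window)
    {
    }

    protected override void ShowChildViewController(UIViewController viewController,
        MvxChildPresentationAttribute attribute, MvxViewModelRequest request)
    {
        // 5.0.0-5.0.3 refuses to show a MvxTabBarViewController as a child,
        // so push it onto the existing navigation stack ourselves
        if (viewController is IMvxTabBarViewController && MasterNavigationController != null)
        {
            MasterNavigationController.PushViewController(viewController, attribute.Animated);
            return;
        }

        base.ShowChildViewController(viewController, attribute, request);
    }
}

// In Setup.cs
protected override IMvxIosViewPresenter CreatePresenter()
{
    return new CustomIosViewPresenter(ApplicationDelegate, Window);
}
```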


MvvmCross.Dialog is gone?

Yes. It was unmaintained, and upstream MonoTouch.Dialog has been unmaintained for just as long. The MvvmCross team does not want to support something that no one wants to fix.

What should you use instead? How about Xamarin.Forms? Or plain iOS Tables and Views? Sorry, no shortcuts here. You could recompile Dialogs from MvvmCross 4.x against 5.x yourself.

What I have opted for is using MvxTableViewController and writing my own table view sources (adapters) for cases where I need to display different kinds of cells, groupings, etc. It is not that much work, and you are not dependent on a third-party library. You can still bind MvxTableViewCells to whatever you want, and TwoWay bindings work much better than they ever did in MonoTouch.Dialog.
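As a minimal sketch of that approach (CheesesView, CheesesViewModel, Cheeses and CheeseCell are made-up names, and CheeseCell is assumed to be a nib-based MvxTableViewCell exposing a Key constant):

```csharp
using MvvmCross.Binding.BindingContext;
using MvvmCross.Binding.iOS.Views;
using MvvmCross.iOS.Views;

public class CheesesView : MvxTableViewController<CheesesViewModel>
{
    public override void ViewDidLoad()
    {
        base.ViewDidLoad();

        // MvxSimpleTableViewSource dequeues and binds the cells for us
        var source = new MvxSimpleTableViewSource(TableView, CheeseCell.Key, CheeseCell.Key);
        TableView.Source = source;

        var set = this.CreateBindingSet<CheesesView, CheesesViewModel>();
        set.Bind(source).To(vm => vm.Cheeses);
        set.Apply();

        TableView.ReloadData();
    }
}
```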


Identifying users with HockeyApp

I use HockeyApp for crashes and app analytics. Users that are logged into the app usually have different accounts than the ones they use with HockeyApp, so most of the time I can't use the LoginManager.

Instead, to identify users I do the following.

Android

Create an implementation of CrashManagerListener which overrides UserID, Contact and Description, then use it when registering the CrashManager, as shown below.
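A sketch of what that can look like; the values passed to the listener and the "HOCKEYAPP_APP_ID" string are placeholders for your own user data and App ID:

```csharp
using HockeyApp.Android;

public class MyCrashManagerListener : CrashManagerListener
{
    readonly string _userId;
    readonly string _email;

    public MyCrashManagerListener(string userId, string email)
    {
        _userId = userId;
        _email = email;
    }

    // These values end up in the User and Contact columns of the crash report
    public override string UserID => _userId;
    public override string Contact => _email;
    public override string Description => "Any extra app state you want attached to the report";
}

// In your main Activity's OnCreate:
// CrashManager.Register(this, "HOCKEYAPP_APP_ID",
//     new MyCrashManagerListener("some-user-id", "user@example.com"));
```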

iOS

On iOS you need to implement BITHockeyManagerDelegate, overriding UserIdForHockeyManager(), UserNameForHockeyManager() and UserEmailForHockeyManager(). Then, right before starting the SharedHockeyManager, you need to register your delegate.
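A sketch of the same idea on iOS; the user values and App ID are placeholders, and the delegate method signatures are the ones I recall from the HockeySDK.Xamarin binding:

```csharp
using HockeyApp.iOS;

public class MyHockeyManagerDelegate : BITHockeyManagerDelegate
{
    public override string UserIdForHockeyManager(BITHockeyManager hockeyManager, BITHockeyBaseManager componentManager)
        => "some-user-id";

    public override string UserNameForHockeyManager(BITHockeyManager hockeyManager, BITHockeyBaseManager componentManager)
        => "Jane Doe";

    public override string UserEmailForHockeyManager(BITHockeyManager hockeyManager, BITHockeyBaseManager componentManager)
        => "jane@example.com";
}

public static class HockeyAppSetup
{
    public static void Register()
    {
        var manager = BITHockeyManager.SharedHockeyManager;
        manager.Configure("HOCKEYAPP_APP_ID");            // placeholder App ID
        manager.Delegate = new MyHockeyManagerDelegate(); // register before StartManager
        manager.StartManager();
    }
}
```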
Now the User and Contact columns in a crash report on HockeyApp should be filled out with the values you provided.

Firebase Cloud Messaging in Xamarin.Android

I've seen people asking a lot about this lately on the XamarinChat Slack. Looking around, there does not seem to be much good information on how to get Firebase Cloud Messaging (FCM) to work in a Xamarin.Android project. I made the switch from Google Cloud Messaging to FCM in one of my apps a month or so ago, and I too was lacking information on how to get it to work. I got it working by piecing together information from the Java world. Surprisingly enough, it is not that much different in Xamarin.Android.

1. Creating a Firebase Project

If you have not already created a Firebase project, you need to do so now. You can either opt to import an existing Google project or create a new one. Either way, the instructions are pretty much the same.

1. Go to the Firebase Console

2. Create a new project or import Google Project


I don't think it is super important which region you choose for your project. I just chose the region where I expect most of my users to be.

2. Registering Application

When you have created your project or imported an existing one, you can add your application. This can be done in several places in the console; I just went to Notifications and it prompted me to add an application.
Pressing the Android icon will show you a dialog for registering the app.
Here you need to enter the details of your application. Package name is the package name you have set in your AndroidManifest.xml file; you might want to change it to something resembling the namespace of your app, and make sure it is all lowercase. Nickname is optional and is just an easy way to identify your app in the console. Press Add App, or optionally do step 2.1 first to add a SHA-1 and then press Add App. The browser will now download a google-services.json file, which we will use later in step 4.

2.1 Creating a keystore (optional)

If you haven't already created a keystore file for your App, you can do so now. For more information refer to the Xamarin Documentation.
You will need it for deploying to the Google Play Store anyway, and for Google Play Services such as Google Maps. The easiest way to create one is through the Archive tool in Visual Studio or Xamarin Studio, which you can trigger by right-clicking your Android application project. Make sure you have selected the Release configuration and that your app builds.
At the bottom of the Archive screen, when an Archive is ready, click Distribute... > Ad Hoc > +
A dialog for creating a new keystore will appear, which you will need to fill out with relevant details. Create your keystore and now we need to find it on the computer.
Note: A keystore is used to sign the application, for deployment on the Google Play Store. Many Google Play Services require a SHA-1 entered into the Google Console for your App package
On Windows, the Xamarin Archive tool puts keystores in C:\Users\<username>\AppData\Local\Xamarin\Mono for Android\Keystore; you can find the one you just created in a subfolder matching the name you gave it. To get the SHA-1 from a keystore you run the keytool command line tool. If it isn't in your PATH, it is located in the Java SDK folder, in my case C:\Program Files\Java\jdk1.8.0_102\bin. Assuming you don't have it in your PATH, the command to get the SHA-1 would look something like

"C:\Program Files\Java\jdk1.8.0_102\bin\keytool.exe" -exportcert -list -v -alias yourkeystorealias -keystore C:\Users\<username>\AppData\Local\Xamarin\Mono for Android\Keystore\fcmtest\fcmtest.keystore

Here yourkeystorealias is the alias of your keystore, in this case the same as its name; a keystore can contain multiple aliases, although the Xamarin Archive tool does not seem to care about that. The output will list the certificate fingerprints.
We need the SHA-1, which you can enter as the certificate SHA-1 for your Firebase project.
You can also grab the SHA-1 from the debug.keystore in the C:\Users\<username>\AppData\Local\Xamarin\Mono for Android folder and enter it later as an additional fingerprint in the Firebase console.

3. Adding FCM NuGet to your project 

Before we can do step 4, where we add google-services.json to our project, we need to add a NuGet package for the FCM functionality. It also pulls in the Google Play Services NuGet, which provides the Build Action needed in step 4.

Add the Xamarin.Firebase.Messaging NuGet to your project. Currently there is only a pre-release version of it. Hence you need to tick the Include Prerelease box in your NuGet package manager.

 

4. Adding google-services.json to the application

We will now add the google-services.json file from step 2 to the Android project.

Right-click your project, select Add > Existing Item, find the google-services.json file in the folder your browser downloaded it to, then press Add.

Now we need to set the build action of that file to GoogleServicesJson.
Note: If the GoogleServicesJson Build Action is not present, follow the instructions in this GitHub issue which tells you to: 1. clean/rebuild your project. 2. restart Visual Studio/Xamarin Studio. 3. make sure that the csproj file includes Xamarin.GooglePlayServices.Basement.targets
Set the Build Action by right-clicking the google-services.json file you added to your project, clicking Properties, and selecting the Build Action there.

5. Adding FCM receivers to the AndroidManifest

Inside the AndroidManifest.xml we need to add a couple of entries for the FCM BroadcastReceivers, which are not added by the NuGet package.

Locate the <application> tag in your manifest and add the following inside that node.
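The entries look like this; as far as I recall it matches the snippet from Xamarin's FCM getting-started documentation at the time:

```xml
<receiver
    android:name="com.google.firebase.iid.FirebaseInstanceIdInternalReceiver"
    android:exported="false" />
<receiver
    android:name="com.google.firebase.iid.FirebaseInstanceIdReceiver"
    android:exported="true"
    android:permission="com.google.android.c2dm.permission.SEND">
  <intent-filter>
    <action android:name="com.google.android.c2dm.intent.RECEIVE" />
    <action android:name="com.google.android.c2dm.intent.REGISTRATION" />
    <category android:name="${applicationId}" />
  </intent-filter>
</receiver>
```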

${applicationId} will get replaced at build time with your app's package name. Make sure your package name conforms to Android's guidelines.

6. Implementing FirebaseInstanceIdService and FirebaseMessagingService

Now we need to add two services to our app. FirebaseInstanceIdService gives us the token that identifies the FCM registration of this particular device; this is normally sent to a backend so it knows how to send notifications to the device.
FirebaseMessagingService is the service that is invoked when we receive a notification in the application. Here you prepare the notification to be displayed with the data coming from FCM.

6.1 FirebaseInstanceIdService

In order to receive the token from the App's registration with FCM, we need to implement FirebaseInstanceIdService. In here you would save the token and communicate it to the backend, which sends the notifications to the App.
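A sketch of such a service; the class name is mine, and the TODO marks where you would persist and forward the token:

```csharp
using Android.App;
using Firebase.Iid;

[Service]
[IntentFilter(new[] { "com.google.firebase.INSTANCE_ID_EVENT" })]
public class MyFirebaseInstanceIdService : FirebaseInstanceIdService
{
    public override void OnTokenRefresh()
    {
        // The token identifying this device's FCM registration
        var token = FirebaseInstanceId.Instance.Token;

        // TODO:
        // 1. Compare with the token you stored previously
        // 2. If it changed, persist the new token
        // 3. Send the new token to your backend
    }
}
```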


What I usually do in steps 1 and 2 of the TODO comment is check the token I saved in SharedPreferences to see whether it has changed, and overwrite it if it has. Then I send a request to my backend with the new token.


6.2 FirebaseMessagingService

The FirebaseMessagingService is where we receive our notifications. Here you get the data out of the notification sent to you. If you sent extra data with your notification, take a look at the Data property on the RemoteMessage argument we get in OnMessageReceived. This will probably also apply to raw notifications that do not trigger any visual feedback.

In either case, here is some sample code to get you started. I simply grab the Title and Body sent with the notification and present it with an Intent that starts my MainActivity. The Intent accepts extras which you can add to it, and when the notification triggers your Activity you can pull that data out and act accordingly.
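A sketch of the service; MainActivity and Resource.Drawable.Icon are placeholders for your own launcher Activity and notification icon:

```csharp
using Android.App;
using Android.Content;
using Firebase.Messaging;

[Service]
[IntentFilter(new[] { "com.google.firebase.MESSAGING_EVENT" })]
public class MyFirebaseMessagingService : FirebaseMessagingService
{
    public override void OnMessageReceived(RemoteMessage message)
    {
        var title = message.GetNotification()?.Title;
        var body = message.GetNotification()?.Body;

        // Extras added here can be read again in the Activity when the notification is tapped
        var intent = new Intent(this, typeof(MainActivity));
        intent.AddFlags(ActivityFlags.ClearTop);
        intent.PutExtra("fcm_body", body);

        var pendingIntent = PendingIntent.GetActivity(this, 0, intent, PendingIntentFlags.OneShot);

        var notification = new Notification.Builder(this)
            .SetSmallIcon(Resource.Drawable.Icon)   // placeholder drawable
            .SetContentTitle(title)
            .SetContentText(body)
            .SetAutoCancel(true)
            .SetContentIntent(pendingIntent)
            .Build();

        var notificationManager = (NotificationManager)GetSystemService(NotificationService);
        notificationManager.Notify(0, notification);
    }
}
```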

That should be it! Build and fire up your Application and use the Firebase Console to send some test notifications.





The GooglePlayServicesComponents repository that Xamarin has on GitHub contains a sample application you can take a look at if you need a starting point. You can also take a look at the documentation for the Firebase component, which is not yet in the Xamarin Component Store.

Face detection on iOS

I've been playing a bit with the Camera on iOS lately and released a photo gallery/camera library called Chafu. iOS provides a fairly easy API to do all sorts of stuff, such as reading bar codes, QR codes and some other types of machine readable codes. It also supports finding faces!

So as a fun feature I decided to figure out how to detect a face and show it live in the preview on screen when taking a photo, or recording a video, then add it in Chafu.

Note: iOS 7 added the functionality to AVFoundation to do all of the above, so what I describe here applies to iOS 7 and up. Keep that in mind if you intend to support earlier versions.
In this blog post I assume you already know how to set up an AVCaptureSession, AVCaptureDeviceInput and AVCaptureVideoPreviewLayer to preview what is coming from the camera.

Setup face detection

First we need to add an AVCaptureMetadataOutput to our session. This class is what detects faces, and it requires IAVCaptureMetadataOutputObjectsDelegate to be implemented in our class, which is where we get a callback when faces are detected.

Setting up an AVCaptureMetadataOutput is simple (a sketch follows the list):
  1. Instantiate an instance of it
  2. Add it to the AVCaptureSession
  3. Find out whether face metadata is available
  4. Set up the callback if 3. is OK
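Roughly like this, assuming session is the AVCaptureSession you already configured and this is the class that implements the delegate:

```csharp
var metadataOutput = new AVCaptureMetadataOutput();

if (session.CanAddOutput(metadataOutput))
    session.AddOutput(metadataOutput);

// Face metadata only becomes available once the output has been added to the session
if (metadataOutput.AvailableMetadataObjectTypes.HasFlag(AVMetadataObjectType.Face))
{
    metadataOutput.MetadataObjectTypes = AVMetadataObjectType.Face;
    metadataOutput.SetDelegate(this, DispatchQueue.MainQueue);
}
```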


Notice the this we passed in as the delegate above? That is the implementation of IAVCaptureMetadataOutputObjectsDelegate, which I decided to add directly to my camera class. The implementation looks somewhat like the following.
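Something along these lines; CameraView is a placeholder for whatever class owns your capture session:

```csharp
using AVFoundation;
using Foundation;
using UIKit;

public class CameraView : UIView, IAVCaptureMetadataOutputObjectsDelegate
{
    [Export("captureOutput:didOutputMetadataObjects:fromConnection:")]
    public void DidOutputMetadataObjects(AVCaptureMetadataOutput captureOutput,
        AVMetadataObject[] metadataObjects, AVCaptureConnection connection)
    {
        // Each detected face arrives here as an AVMetadataFaceObject
    }
}
```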


The Export attribute is pretty important, as it is how we tell iOS that we have implemented the delegate method and how it gets hold of our code.

Now, hopefully, when you run this code and set a breakpoint in the DidOutputMetadataObjects method, it will get hit when a face is detected. The metadataObjects argument will contain AVMetadataFaceObjects, which contain everything you need to visually show the detected faces.


Display faces

To display faces I like using iOS layers, which give us the possibility to do rotations and other transformations pretty easily. Later in this post I will show how to utilize the Roll and Yaw from the AVMetadataFaceObject when drawing the visual indicator for a face.

First we need to set up the CALayer we will add detected faces to. Having a dedicated overlay layer makes it easy to clear its sublayers later when we don't want to show faces anymore.
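A sketch of that setup, assuming previewLayer is the AVCaptureVideoPreviewLayer already showing the camera feed and overlayLayer is a field on the camera class:

```csharp
// Run this where you configure the preview layer
overlayLayer = new CALayer
{
    Frame = previewLayer.Bounds
};
previewLayer.AddSublayer(overlayLayer);
```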


Now we can add faces to that layer in our DidOutputMetadataObjects method. This is done in a couple of simple steps.

  1. Iterate the metadataObjects array, which we get as an argument
  2. Make sure each object is indeed an AVMetadataFaceObject
  3. Transform the object through AVCaptureVideoPreviewLayer's GetTransformedMetadataObject to get the correct coordinates for the face
  4. Create a new CALayer for the face with a border or the desired effect
  5. Set the Frame of the CALayer to the Bounds we got from the transformation
  6. Add it as a sublayer of the overlayLayer we created earlier
Let's start by defining a method that creates the layer which will show the face.
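Something like this; the method name is mine:

```csharp
static CALayer CreateFaceLayer()
{
    // A white, rounded rectangle with a 2 point border
    return new CALayer
    {
        BorderColor = UIColor.White.CGColor,
        BorderWidth = 2f,
        CornerRadius = 4f
    };
}
```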



Here I simply create a new CALayer and set the border color, width and corner radius. So what we will see are white squares with border width 2 and rounded corners. Simple!

Now, let's add this layer to the overlayLayer so we can actually see something on the screen.
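Roughly as follows, inside DidOutputMetadataObjects:

```csharp
foreach (var metadataObject in metadataObjects)
{
    // Step 2: only handle face objects
    var face = metadataObject as AVMetadataFaceObject;
    if (face == null)
        continue;

    // Step 3: convert from metadata coordinates to preview layer coordinates
    var transformed = previewLayer.GetTransformedMetadataObject(face);

    // Steps 4-6: create the layer, size it, and add it to the overlay
    var faceLayer = CreateFaceLayer();
    faceLayer.Frame = transformed.Bounds;
    overlayLayer.AddSublayer(faceLayer);
}
```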


That is it! Now you should have squares showing up for faces. One problem though: this keeps adding layers. So we need to remove the old sublayers before adding the new ones.
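A small helper for that, clearing the overlay before the new faces are drawn:

```csharp
void RemoveFaces()
{
    var sublayers = overlayLayer.Sublayers;
    if (sublayers == null)
        return;

    foreach (var layer in sublayers)
        layer.RemoveFromSuperLayer();
}
```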


Call the RemoveFaces() method before you iterate metadataObjects in DidOutputMetadataObjects, and that should take care of it.
The observant reader might notice that this seems inefficient. I won't cover that in this article; however, you can see one approach to solving it in BaseCameraView in Chafu, where I keep track of the FaceId from the AVMetadataFaceObject and simply adjust the bounds for that face if it has moved.

Adjusting for Yaw and Roll angles

The AVMetadataFaceObject gives us a RollAngle for when you rotate your head around the Z axis. It also gives us a YawAngle for rotations around the Y axis.

Picture from StackOverflow: http://stackoverflow.com/q/16401505/368379
Supporting RollAngle is easy, as it does not require that we rotate into the Z plane, but rather around it. Rotations around the Y axis, however, will move the rectangle into the Z plane. By default a CALayer is flat and has no idea of perspective. We can fix that! Back where we create the overlayLayer, we need to add a simple transformation which will give us this perspective.
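Extending the earlier overlay layer setup with that perspective could look like this, using the distance of 1000 mentioned below:

```csharp
// Give sublayers of the overlay a perspective by setting the m34 entry
var perspective = CATransform3D.Identity;
perspective.m34 = 1.0f / -1000f;

overlayLayer = new CALayer
{
    Frame = previewLayer.Bounds,
    SublayerTransform = perspective
};
previewLayer.AddSublayer(overlayLayer);
```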


What this does is take the default transformation and add a distance to the 3D projection plane in terms of 1/z; in other words, we add depth. Apple does this in reverse, hence -1/z is used, where z is the distance, in this case 1000. The bigger the value, the bigger the distance. For a more detailed explanation you could start by reading about 3D projection on Wikipedia.

Now we can do our rotations to the face CALayers.

Roll

To make a roll rotation we simply create a new CATransform3D using the static MakeRotation method, which allows us to rotate around any axis. As shown above, roll rotations are around the Z axis. CATransform3D expects the angle in radians, hence we need to convert it first.
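A sketch of that, with the angle conversion; the helper name is mine:

```csharp
static CATransform3D CreateRollTransform(AVMetadataFaceObject faceObject)
{
    // RollAngle is in degrees; MakeRotation expects radians.
    // Roll is a rotation around the Z axis, hence the (0, 0, 1) axis.
    var radians = (nfloat)((double)faceObject.RollAngle * Math.PI / 180.0);
    return CATransform3D.MakeRotation(radians, 0, 0, 1);
}
```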


Pretty simple. We will apply this to the face layer later.

Yaw

The yaw rotation is a bit more involved. Faces are always detected in the same orientation, while our preview layer rotates along with the device, so we need to know the device orientation and adjust the angle accordingly.
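One way to compute that orientation compensation is sketched below; the exact angles may need tweaking depending on how your preview is set up:

```csharp
static CATransform3D OrientationTransform()
{
    // Faces are detected in a fixed orientation, so rotate around Z to
    // match the current device orientation before applying the yaw rotation
    double angle;
    switch (UIDevice.CurrentDevice.Orientation)
    {
        case UIDeviceOrientation.PortraitUpsideDown:
            angle = Math.PI;
            break;
        case UIDeviceOrientation.LandscapeRight:
            angle = -Math.PI / 2.0;
            break;
        case UIDeviceOrientation.LandscapeLeft:
            angle = Math.PI / 2.0;
            break;
        default:
            angle = 0;
            break;
    }
    return CATransform3D.MakeRotation((nfloat)angle, 0, 0, 1);
}
```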


Now we need to make the yaw transformation and combine it with the orientation transformation.
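Roughly like this, reusing the orientation helper from above; the direction of the Y axis may need flipping depending on whether your preview is mirrored:

```csharp
static CATransform3D CreateYawTransform(AVMetadataFaceObject faceObject)
{
    // Yaw is a rotation around the Y axis; the angle comes in degrees
    var radians = (nfloat)((double)faceObject.YawAngle * Math.PI / 180.0);
    var yaw = CATransform3D.MakeRotation(radians, 0, -1, 0);

    // Combine with the orientation compensation
    return yaw.Concat(OrientationTransform());
}
```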


Finally, apply the two transformations to the face CALayer in the DidOutputMetadataObjects method.
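In the loop from earlier that could look like this:

```csharp
// After creating faceLayer for the detected face
faceLayer.Transform = CATransform3D.Identity;

if (face.HasRollAngle)
    faceLayer.Transform = faceLayer.Transform.Concat(CreateRollTransform(face));

if (face.HasYawAngle)
    faceLayer.Transform = faceLayer.Transform.Concat(CreateYawTransform(face));
```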


Notice that I add a default transformation to the faceLayer and concatenate the roll and/or yaw transform depending on availability. That is it! Now you should have something like the screenshots below, taken from Chafu, which demonstrate detection with no rotation, then with roll and then with yaw.

No rotation
Roll
Yaw