This year’s Google I/O brought novelties straight out of a sci-fi movie. You’ve most probably already seen most of the news circulating around the Internet, or at least heard about things like Google Assistant or the newest version of the Android OS.
From what we have seen, the upcoming months are going to be all about artificial intelligence - which wasn’t difficult to predict. AI is obviously the hottest topic right now, but what is really significant is that Google I/O not only gave us more AI but also finally started talking about letting developers create their own intelligent solutions using Google’s technology. AI with a Google tag is always great news, but the chance to develop your own ideas using their tools and experience is obviously even more exciting.
Let’s take a closer look at what this year’s edition of this big developer festival brought us, and most importantly - what it changes in the context of application development.
To organize: Android Jetpack
Let’s begin with something overlooked by many tech websites, yet so significant for Android developers: Android Jetpack. Behind this quite quirky name hides the great idea of finally organizing Android’s libraries and components.
For those familiar with Android development, it’s a well-known fact that using the tools provided by the platform requires the ability to manage chaos. There are lots of components you are forced to write from scratch every single time, even though you should be able to reuse what was already created. To be a little cynical: you almost need support to use the Support Library.
In addition to that, you have to deal with all versions of the platform. Every single one of them differs in code, so if you want to keep compatibility, you have to use libraries provided by Google. New features land in the newest version of the system and are also shipped as libraries. Developers always use the newest versions of the support libraries, because they make those new features work on older versions of the system as well.
For example, a year ago Google introduced the option of adding a custom font to your application. If you are targeting only the newest version of the OS (which almost never happens), you don’t have to include the library - you simply declare the font. But if you are also targeting a few older versions of the system, you have to include the support library and declare the font in a way that is understood by both the new platform version and the library.
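As a sketch of what that dual declaration looks like in practice (the `my_font` resource name is hypothetical), a layout targeting older versions declares the same font twice - once in the `android:` namespace for the framework and once in the `app:` namespace for the support library:

```xml
<!-- res/layout/example.xml: the android: attribute is read by the framework
     on API 26+, while the app: attribute is read by the Support Library
     on older versions; both point at the same res/font resource. -->
<TextView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:fontFamily="@font/my_font"
    app:fontFamily="@font/my_font" />
```

In code, the support library’s `ResourcesCompat.getFont(context, R.font.my_font)` resolves the same resource in a backwards-compatible way.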
Finally, Google wants to fix that problem - pull individual components out of the platform versions, pack them into separate packages and share them with developers to enjoy. The way those libraries are released will change as well - previously they were released as one Support Library spanning several platform versions. This led to a situation in which, whenever a bug occurred in one library, you had to wait for the whole thing to be released as a set just to get that one fix. Now everything is separated, so every bug fix can ship as soon as it’s ready.
Summing things up: Android Jetpack is a set of components, tools, guidance and videos - everything that may improve developers’ work and make their lives easier. It finally brings together the existing Support Library - and adds new cool things. Those new packages - more specifically, the extension libraries - will be known as AndroidX. Developers will pull components from them separately, and each one will be fully independent of the platform version.
Google encourages developers to use the new facilities wholeheartedly. They even provide tools that help adopt the new libraries - going as far as tools that go through your code, find the old versions of the components and rewrite them to use the new ones. Google is also open to feedback - as they say, they want to fix any bugs and make the experience of building Android apps even better. Good move, Google - we’ll keep an eye on you.
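To give a rough idea of what the switch looks like in a build script (the artifact versions here are illustrative), the old lockstep Support Library coordinates are replaced by independently versioned AndroidX ones:

```groovy
// app/build.gradle
dependencies {
    // Before: monolithic Support Library artifacts, all pinned to the
    // same release train and tied to the platform version
    // implementation 'com.android.support:appcompat-v7:28.0.0'
    // implementation 'com.android.support:design:28.0.0'

    // After: separate AndroidX artifacts, each versioned independently
    implementation 'androidx.appcompat:appcompat:1.0.0'
    implementation 'com.google.android.material:material:1.0.0'
}
```

The automated rewriting mentioned above ships in Android Studio as the Refactor > Migrate to AndroidX command, which updates imports and dependencies in one pass.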
To customize: Material Design
There were voices saying that Material Design, although undoubtedly clean and intuitive, puts lots of limitations on designers and developers and doesn’t fit every brand. Lots of people avoided Material Design for this reason alone, so Google decided to add more branding possibilities - more options for customization, more components and less restrictive guidelines.
Besides that, this year Google made Material Design components available separately, so they can be used not only on Android but also on iOS, Flutter and the web.
Moreover, Material Design now integrates with Sketch. A plugin called the Material Theme Editor lets you use the new components, customize them, override their attributes and - most interestingly - share the project with anyone you want. Continuing the theme of integration, Google gives us Gallery, which integrates with the whole G Suite - and doesn’t cost you a penny.
Finally, designers get more room to act, thanks to many new components. They can be more creative, with far fewer limitations. At the same time, developers can breathe a sigh of relief, because implementing designers’ ideas is now easier than ever. A designer can compose a button cut into some extraordinary shape without breaking the rules of Material Design, and the developer can implement it without spending too much time and effort wondering how. Pax et bonum.
You can get every piece of information you need, including guidelines and detailed descriptions of use on each platform, on the material.io website. The website is also a great source of inspiration, with many material studies and examples of use. If you are involved in Material Design in any way, you should definitely check it out.
Overall, Material Design has become more friendly, open and engaging. Although some still find it limiting and too schematic to allow truly original work, no one can deny that Google is doing their best to make their design style as convenient as possible.
To bring the future closer: Artificial Intelligence
Yeah, we get it, Google loves AI. And we don’t blame them, because we love it too. But this year, besides the expected, impressive show of AI-powered things, Google gave us the chance to do some Sci-Fi ourselves.
Basically, developers are getting access to a simplified version of Google’s machine learning called ML Kit, with features such as face, text, object and QR-code recognition. You can easily integrate it with your own models thanks to TensorFlow Lite - an open-source framework that enables on-device machine learning inference with low latency and a small binary size. TensorFlow itself isn’t fresh news - it existed before, but was too heavy for mobile devices. Its Lite version fixes that problem and is available for both Android and iOS.
The downside? You have to use Firebase in order to enjoy ML Kit, but the advantages are more than tempting. Things like QR-code recognition used to be tedious to implement, but thanks to Google’s framework, they won’t be anymore.
Google is really widening the use of machine learning in its products. Things like colourizing black-and-white photos or recognizing the faces in them are now perfectly convenient and usable. You can count on your Google-powered device to help you arrange your wedding photos by guests’ presence - and even ask whether you want to share them with those particular people. From what it seems, Google is setting a whole new standard for the intelligence of our everyday devices.
Speaking of the intelligent devil - the famous Google Assistant, an intelligent voice assistant capable of two-way communication, is also finally being opened up to developers in the form of Actions. Actions on Google is a developer platform that lets you create software extending the functionality of Google Assistant. When you build an Action for the Assistant, you design your conversations for a variety of surfaces, which lets users get things done quickly and comfortably through either voice or visual affordances. According to developers, it works really well and opens a lot of new possibilities.
To create a whole new experience: Augmented Reality
Augmented Reality, as its name suggests, brings elements of the virtual world and overlays them onto the real world, enhancing things we really experience with things someone developed for us to experience.
Almost a year ago, Google announced ARCore, a platform dedicated to building AR experiences. More and more devices support it, which gives more and more users a chance to experience augmented reality. This year, Google showed a feature that recognizes a particular object and then starts to augment it - for example, after recognizing a box of Lego, the application overlays minifigures running away from their package. It will probably find a lot of use in advertising, not to mention it looks hella cool.
What is even more significant, Google also presented Cloud Anchors, which allow developers to create more collaborative AR experiences through the cloud. Basically, you will now be able to share what you see, hear or feel in augmented reality with others - whether you’re on Android or iOS. From what we have seen, it may seem to serve mostly an entertainment purpose - Google presented the feature using a game as an example - but the possibilities for utilizing it are endless.
As for other improvements to ARCore - Google released Sceneform, a new software development kit that helps Java developers implement AR scenes without needing to learn OpenGL. It is optimized for mobile devices, and it looks like it will let Android developers create augmented reality experiences without engaging other specialists. Overall, it is yet another technology that Google decided to open up and share.
To make apps lighter: App Bundle
Google also announced a new publishing format called Android App Bundle - an improved way to package your app (an important thing to note: a bundle can’t be installed directly on a device). If an application has a part written with the NDK (Native Development Kit), then - because of the many CPU architectures and screen resolutions out there - the APK install file gets very large. Until now, you could either ship one very big file or build every configuration of the APK separately, based on CPU architecture or resolution. That was very annoying. Moreover, every APK built this way had to have a unique version code.
App Bundle solves that problem: you build one intermediate file, and Google Play turns it into up to 900 combinations (one for each combination of CPU architecture, resolution and language). You no longer need to build, sign, upload and manage code versions for multiple APKs. Quoting the most reliable source, Android’s developer guide: “Google Play’s new app serving model, called Dynamic Delivery, then uses your app bundle to generate and serve optimized APKs for each user’s device configuration, so they download only the code and resources they need to run your app”.
Results? Users get smaller, more optimized downloads. And as we know, the bigger the app, the smaller the chance to persuade the user to install it. Adding the fact that apps are getting more and more advanced, thus also significantly bigger, format optimization is a wise move.
Google also announced another novelty called dynamic features. It basically lets you make your app modular - users can then download and install your app’s dynamic features on demand. This can be useful for functionality that only a small percentage of users will ever touch; however, creating on-demand modules requires more time and effort, so think it through carefully before deciding you really need it.
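For illustration, an on-demand module is marked in its manifest with distribution attributes like these (the package and title names are hypothetical; the module’s build script applies the `com.android.dynamic-feature` Gradle plugin instead of the regular application plugin):

```xml
<!-- AndroidManifest.xml of a hypothetical dynamic feature module -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:dist="http://schemas.android.com/apk/distribution"
    package="com.example.feature.extras">
    <!-- onDemand="true" keeps the module out of the initial install;
         fusing include="false" also leaves it out of the merged APKs
         served to devices that don't support split APKs -->
    <dist:module
        dist:onDemand="true"
        dist:title="@string/title_extras">
        <dist:fusing dist:include="false" />
    </dist:module>
</manifest>
```

At runtime, the app then requests such a module through the Play Core library’s SplitInstallManager before using any of its code or resources.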
To connect real & digital: New Android P
Last but not least: the newest child of the Android family is going to grace us with its presence later this year. We don’t know its name yet, but we can be sure it will begin with “P” and will be delicious. After spicy Gingerbread, gentle Marshmallow and over-the-top sweet Nougat, we are hearing rumours about refreshing Pineapple, as the password for the Google I/O Wi-Fi network was “p1n3appl3”. But maybe it was just Google trolling us all.
Android P brings a few big changes, the most significant of them being the change in the navigation bar - you will now be able to navigate through gestures. Going further, we have a number of features connected with using your Android-powered device in a smarter and more responsible way.
Firstly, we have a Dashboard that tracks the amount of time you spend on your device. You can clearly see how much time you waste mindlessly scrolling through Facebook or watching funny cats on YouTube (although, in my honest opinion, that particular activity is anything but wasted time). With certain apps, you can even receive recommendations to give yourself a break. You’ll also be able to set up screen time limits for specific applications.
Secondly, we have Wind Down mode. You tell Google Assistant when you would like to go to bed, and it will automatically put your phone in Do Not Disturb mode and switch your screen to grayscale. You know that feeling when you are lying in bed long after midnight, promising yourself to get to sleep any minute, but instead you pick up your smartphone and start scrolling through social media without any particular aim? Android wants to fight it - and make your smartphone less attractive by forcing grayscale onto your screen after you pass your usual bedtime. Your device will not only remember when you like to go to sleep but also how bright you like your screen to be. Once it understands your preferences, it can make the necessary adjustments depending on lighting conditions.
Another thing that will be personalized thanks to AI is app suggestions. Android will not only highlight your most frequently used apps but also make contextual suggestions based on your usage. What’s more? Of course - face detection, landmark recognition, text recognition and a host of other recognition features, smarter prevention of accidental screen rotations, improved battery life and better security. The list goes on and on, but those are what we can call the most noticeable changes. The second beta is already available, and the final Android 9.0 P-something-sweet release is expected in August.
To inspire & provide tools
Google I/O 2018 was an impressive show of artificial intelligence and machine learning. Somewhere between being stunned by how Google Assistant makes an appointment with the hairdresser without anyone noticing the difference and having fun with the emotion-recognizing plants, we start to notice that soon these intelligent solutions will not just be a cool thing to play with, but an integral part of our everyday lives.
It’s significant that Google shares their tools and experience with other developers - it seems that pretty soon we will be so used to devices and applications powered by machine learning that those which don’t use it will lose their relevance. Better to befriend the technology as soon as possible - Android developers have been given a set of tools to begin with. The rest depends on their creativity and innovativeness.