3 fun experiments to try for your next Android app, using Google AI Studio




Posted by Paris Hsu – Product Manager, Android Studio

We shared an exciting live demo from the Developer Keynote at Google I/O 2024 where Gemini transformed a wireframe sketch of an app’s UI into Jetpack Compose code, directly inside Android Studio. While we’re still refining this feature to make sure you get a great experience inside Android Studio, it’s built on top of foundational Gemini capabilities which you can experiment with today in Google AI Studio.

Specifically, we’ll delve into:

    • Turning designs into UI code: Convert a simple image of your app’s UI into working code.
    • Smart UI fixes with Gemini: Receive suggestions on how to improve or fix your UI.
    • Integrating Gemini prompts in your app: Simplify complex tasks and streamline user experiences with tailored prompts.

Note: Google AI Studio offers various general-purpose Gemini models, whereas Android Studio uses a custom version of Gemini that has been specifically optimized for developer tasks. While this means these general-purpose models may not offer the same depth of Android knowledge as Gemini in Android Studio, they provide a fun and engaging playground to experiment and gain insight into the potential of AI in Android development.

Experiment 1: Turning designs into UI code

First, to turn designs into Compose UI code: open the chat prompt section of Google AI Studio, upload an image of your app’s UI screen (see example below) and enter the following prompt:

“Act as an Android app developer. For the image provided, use Jetpack Compose to build the screen so that the Compose Preview is as close to this image as possible. Also make sure to include imports and use Material3.”

Then, click “run” to execute your query and see the generated code. You can copy the generated output directly into a new file in Android Studio.

Image uploaded: designer mockup of an application’s detail screen

Google AI Studio custom chat prompt: image → Compose

Running the generated code (with minor fixes) in Android Studio

With this experiment, Gemini was able to infer details from the image and generate corresponding code elements. For example, the original image of the plant detail screen featured a “Care Instructions” section with an expandable icon; Gemini’s generated code included an expandable card specifically for plant care instructions, showcasing its contextual understanding and code generation capabilities.
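To give a feel for the kind of output involved, here is a minimal sketch of an expandable “Care Instructions” card in Material3 Compose. This is illustrative only (the composable name, parameters, and layout are hypothetical, not Gemini’s exact output):

```kotlin
import androidx.compose.animation.animateContentSize
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.padding
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.KeyboardArrowDown
import androidx.compose.material.icons.filled.KeyboardArrowUp
import androidx.compose.material3.Card
import androidx.compose.material3.Icon
import androidx.compose.material3.IconButton
import androidx.compose.material3.ListItem
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun CareInstructionsCard(instructions: String) {
    // Remember the expanded/collapsed state across recompositions
    var expanded by remember { mutableStateOf(false) }

    Card(
        modifier = Modifier
            .fillMaxWidth()
            .padding(16.dp)
            .animateContentSize() // animate the expand/collapse transition
    ) {
        ListItem(
            headlineContent = { Text("Care Instructions") },
            trailingContent = {
                IconButton(onClick = { expanded = !expanded }) {
                    Icon(
                        imageVector = if (expanded) Icons.Filled.KeyboardArrowUp
                        else Icons.Filled.KeyboardArrowDown,
                        contentDescription = if (expanded) "Collapse" else "Expand"
                    )
                }
            }
        )
        // Body text is only composed while the card is expanded
        if (expanded) {
            Text(text = instructions, modifier = Modifier.padding(16.dp))
        }
    }
}
```

The `remember { mutableStateOf(false) }` pattern is what makes the card interactive in the Compose Preview’s interactive mode as well as at runtime.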

Experiment 2: Smart UI fixes with Gemini in AI Studio

Inspired by “Circle to Search”, another fun experiment you can try is to “circle” problem areas on a screenshot, provide the relevant Compose code context, and ask Gemini to suggest appropriate code fixes.

You can explore this concept in Google AI Studio:

    1. Upload Compose code and screenshot: Upload the Compose code file for a UI screen and a screenshot of its Compose Preview, with a red outline highlighting the issue: in this case, items in the Bottom Navigation Bar that should be evenly spaced.

Example: Preview with problem area highlighted

Google AI Studio: Smart UI Fixes with Gemini

Example: Generated code fixed by Gemini

Example: Preview with fixes applied

Experiment 3: Integrating Gemini prompts in your app

Gemini can streamline experimentation and development of custom app features. Imagine you want to build a feature that gives users recipe ideas based on an image of the ingredients they have on hand. In the past, this would have involved complex tasks like hosting an image recognition library, training your own ingredient-to-recipe model, and managing the infrastructure to support it all.

Now, with Gemini, you can achieve this with a simple, tailored prompt. Let’s walk through how to add this “Cook Helper” feature into your Android app as an example:

    1. Explore the Gemini prompt gallery: Discover example prompts or craft your own. We’ll use the “Cook Helper” prompt.

Google AI for Developers: Prompt Gallery

    2. Open and experiment in Google AI Studio: Test the prompt with different images, settings, and models to ensure the model responds as expected and the prompt aligns with your goals.

Google AI Studio: Cook Helper prompt

    3. Generate the integration code: Once you’re satisfied with the prompt’s performance, click “Get code” and select “Android (Kotlin)”. Copy the generated code snippet.

Google AI Studio: Get code – Android (Kotlin)

    4. Integrate the Gemini API into Android Studio: Open your Android Studio project. You can either use the new Gemini API app template provided within Android Studio or follow this tutorial. Paste the copied generated prompt code into your project.
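The pasted snippet ends up looking roughly like the sketch below, which calls the Gemini API through the Google AI client SDK for Android. The model name, prompt text, and the `BuildConfig` field holding the API key are illustrative assumptions; use whatever “Get code” actually produced for your prompt:

```kotlin
import android.graphics.Bitmap
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content

// Hypothetical "Cook Helper" call: send a photo of available ingredients
// and ask Gemini for recipe ideas. Runs in a coroutine (network call).
suspend fun suggestRecipes(ingredientsPhoto: Bitmap): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash",          // illustrative model choice
        apiKey = BuildConfig.GEMINI_API_KEY      // assumed to be injected via Gradle
    )

    val response = model.generateContent(
        content {
            image(ingredientsPhoto)
            text("Suggest recipes I can make with the ingredients in this photo.")
        }
    )
    return response.text
}
```

Keep the API key out of source control (for example, load it from `local.properties` into `BuildConfig` at build time), as the Gemini API tutorial recommends.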

That’s it – your app now has a functioning Cook Helper feature powered by Gemini. We encourage you to experiment with different example prompts or even create your own custom prompts to enhance your Android app with powerful Gemini features.

Our approach to bringing AI to Android Studio

While these experiments are promising, it’s important to remember that large language model (LLM) technology is still evolving, and we’re learning along the way. LLMs can be non-deterministic, meaning they can sometimes produce unexpected results. That’s why we’re taking a cautious and thoughtful approach to integrating AI features into Android Studio.

Our philosophy towards AI in Android Studio is to augment the developer and ensure they remain “in the loop.” In particular, when the AI is making suggestions or writing code, we want developers to be able to carefully audit the code before checking it into production. That’s why, for example, the new Code Suggestions feature in Canary automatically brings up a diff view for developers to preview how Gemini is proposing to modify your code, rather than blindly applying the changes directly.

We want to make sure these features, like Gemini in Android Studio itself, are thoroughly tested, reliable, and truly useful to developers before we bring them into the IDE.

What’s next?

We invite you to try these experiments and share your favorite prompts and examples with us using the #AndroidGeminiEra tag on X and LinkedIn as we continue to explore this exciting frontier together. Also, make sure to follow Android Developer on LinkedIn, Medium, YouTube, or X for more updates! AI has the potential to revolutionize the way we build Android apps, and we can’t wait to see what we can create together.


