If you’ve ever wanted to write code once and run it across multiple platforms without dragging along a heavy runtime, Kotlin Native is worth a look.
It lets you take Kotlin beyond the JVM and compile it into real native binaries. That changes how your apps start, run, and scale across platforms.
Let’s break it down in a simple, practical way.
What Is Kotlin Native?
Kotlin Native is a technology that compiles Kotlin code directly into native machine code.
So instead of this:
Kotlin → bytecode → JVM → runs on device
You get this:
Kotlin → native binary → runs on OS
Normally, Kotlin runs on the JVM, where your code is compiled into bytecode and executed by the Java Virtual Machine.
Kotlin Native skips the JVM entirely. It uses LLVM to compile Kotlin into a standalone executable (like a .exe on Windows or a framework on iOS) that runs directly on the operating system.
This means there’s no JVM runtime or bytecode layer involved — just your compiled program running natively on the platform.
Why Kotlin Native Matters
Most cross-platform tools rely on some kind of bridge or runtime. Kotlin Native skips that.
Here’s what that gives you in practice:
Cross-platform without rewriting logic
You can reuse core logic across iOS, desktop, and other platforms, while still building native UIs.
No runtime dependency
Your app is compiled ahead of time. It runs as a standalone executable.
Faster startup
Since there’s no runtime to spin up, apps launch quickly.
Same Kotlin language
You’re still writing Kotlin. No need to switch mental models.
How Kotlin Native Works
Kotlin Native uses ahead-of-time (AOT) compilation, and its toolchain differs from the JVM compiler in a few key ways:
Backend: It uses LLVM, the same powerful technology used by languages like Swift and C++.
Interoperability: It “talks” natively to C, Objective-C, and Swift.
Memory management: instead of relying on the JVM’s garbage collector, it ships its own memory manager (current versions include a tracing GC) designed to be efficient across platforms.
In simple terms:
You write Kotlin code
The Kotlin Native compiler turns it into machine code
You get a platform-specific binary
That binary runs directly on the OS
No extra runtime involved.
A Simple Kotlin Example
```kotlin
fun main() {
    println("Hello, Kotlin Native!")
}
```
main() is the entry point
println() prints to the console
When compiled with Kotlin Native, this becomes a native executable
Nothing special in the syntax. That’s the point.
Let’s See Some Code!
Working with Functions in Kotlin Native
To understand how Kotlin Native feels, let’s look at a simple example. Imagine we want a shared piece of code that says “Hello” but identifies which platform it’s running on.
The “Expect” and “Actual” Pattern
Kotlin uses a unique system to handle platform-specific features.
```kotlin
// This goes in the "Common" folder
expect fun getPlatformName(): String

fun greet(): String {
    return "Hello from Kotlin Native on ${getPlatformName()}!"
}
```
expect: This tells the compiler, “I promise I will provide the actual implementation for this function on every specific platform (iOS, Windows, etc.).”
Now, here is how the iOS-specific implementation might look:
```kotlin
// This goes in the "iosMain" folder
import platform.UIKit.UIDevice

actual fun getPlatformName(): String {
    return UIDevice.currentDevice.systemName() + " " + UIDevice.currentDevice.systemVersion
}
```
actual: This is the real implementation.
Notice the import platform.UIKit.UIDevice? This is Kotlin Native talking directly to Apple’s UIKit! You are using Kotlin to access iOS system APIs.
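For comparison, the matching actual on a desktop target can be trivial — a sketch, assuming your project declares a linuxMain (or similar) source set:

```kotlin
// This goes in the "linuxMain" folder
actual fun getPlatformName(): String = "Linux"
```

Each target provides its own actual, and the common greet() function never needs to know which one it got.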
Memory Management in Kotlin Native
This used to be one of the trickier parts of Kotlin Native. Older versions had strict rules around sharing objects between threads, requiring object freezing and making concurrency restrictive.
That’s changed. Kotlin Native now has a more relaxed memory model, allowing you to share data across threads more naturally without fighting the system.
It also uses a garbage collector, so you don’t need manual memory management like in C++.
It’s still not identical to JVM behavior, but it’s much easier to work with than before. While concurrency is more flexible now, you’re responsible for ensuring thread safety when working with shared mutable state.
For advanced scenarios like C interop, kotlinx.cinterop provides access to raw pointers—but this is rarely needed in typical development.
Where Kotlin Native Fits
You’ll rarely use Kotlin Native by itself. It’s usually part of Kotlin Multiplatform.
Typical use cases:
Sharing business logic between Android and iOS
Writing cross-platform libraries
Building lightweight backend tools
Working on embedded or edge devices
The main idea is simple: write logic once, reuse it where it makes sense.
Advantages of Kotlin Native
Fast startup
No runtime dependency
Smaller footprint
Can interop with C libraries
Good fit for performance-sensitive code
Limitations to Know
It’s not a silver bullet.
Ecosystem is smaller than JVM
Some libraries won’t work out of the box
Debugging can feel rough at times
Build times can be slow
Most of these are improving, but they’re still worth keeping in mind.
When Should You Use Kotlin Native?
Use it when:
You’re building a cross-platform app with shared logic
You need native performance
You’re targeting iOS alongside Android
Skip it if:
Your app is Android-only
You rely heavily on JVM-specific libraries
Getting Started
A simple way to begin:
Set up a Kotlin Multiplatform project
Add native targets (iOS, macOS, etc.)
Write shared Kotlin code
Compile using Kotlin Native
If you’re using IntelliJ IDEA, most of this is already streamlined.
Tips for Beginners
Start with small examples
Focus on shared logic first
Avoid pulling in too many dependencies early
Test on real targets when possible
Conclusion
Kotlin Native isn’t trying to replace everything — it fills a powerful gap. It lets you share logic across platforms while keeping native performance and experience.
If you already know Kotlin, expanding to iOS and desktop is more accessible than ever.
It’s time to think beyond Android development — and start thinking in terms of native, cross-platform efficiency.
If you’ve ever needed to use an existing C library in a Kotlin project, you’ve probably run into the gap between modern Kotlin and low-level native code. That’s where Kotlin Native CInterop comes in.
This guide breaks it down in a practical way. No fluff, just what you need to understand how it works and how to use it.
What is Kotlin Native CInterop?
Kotlin Native CInterop is a tool that lets you call C (and Objective-C) code directly from Kotlin/Native.
In simple terms:
You reuse existing C libraries
Kotlin generates bindings for you
You call native functions like regular Kotlin functions
It handles a lot of the heavy lifting, including type mapping and function access.
When Should You Use Kotlin Native CInterop?
Use it when:
You need system-level APIs written in C
You want to reuse a stable C library
You’re building with Kotlin Multiplatform
Performance matters and native code already exists
Common examples include crypto libraries, OS-level APIs, or legacy integrations.
How It Works (Quick Overview)
The workflow is straightforward:
Provide a C header file
Create a .def file
Kotlin generates bindings
Call the functions in Kotlin
Once set up, it feels surprisingly natural.
Step-by-Step Setup
1. Create a C Library
```c
// math_utils.h
#ifndef MATH_UTILS_H
#define MATH_UTILS_H

int add(int a, int b);
int multiply(int a, int b);

#endif

// math_utils.c
#include "math_utils.h"

int add(int a, int b) {
    return a + b;
}

int multiply(int a, int b) {
    return a * b;
}
```
2. Create a Definition File
```
headers = math_utils.h
compilerOpts = -I.
```
Save it as: math.def
This tells Kotlin Native CInterop what to process.
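Once the bindings are generated, calling into C looks like plain Kotlin. By default the generated package is named after the .def file, so with the math.def above the usage might look like this (a sketch — the package name depends on your Gradle cinterop configuration):

```kotlin
import math.add      // bindings generated from math.def (assumed package name)
import math.multiply

fun main() {
    // These calls go straight into the compiled C library
    println("2 + 3 = ${add(2, 3)}")
    println("4 * 5 = ${multiply(4, 5)}")
}
```

No JNI, no wrapper layer — the binding functions take and return ordinary Kotlin types where a direct mapping exists.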
When you start working with animations in Jetpack Compose, the spring() API feels intuitive at first—until you notice something odd: animations don’t seem to fully stop. They get very, very close to the target value… but technically never reach it.
That’s where visibilityThreshold quietly does some of the most important work.
This article walks you through what it is, why it matters, and how to use it correctly across different data types like Dp, Float, and Offset. Along the way, we’ll build real composable examples you can run and experiment with.
Why Spring Animations Need a Stopping Condition
Spring animations simulate real physics. Instead of moving linearly from point A to point B, they behave like a spring:
They move toward the target
Overshoot (depending on damping)
Oscillate
Gradually settle
From a math perspective, they never fully stop — they just get closer over time.
In UI, that’s not useful. We need a clear stopping point. Compose handles this using a threshold.
What is visibilityThreshold?
visibilityThreshold defines the minimum difference between the current animated value and the target value at which the animation is considered finished.
In simple terms:
If the remaining distance is smaller than the threshold → animation stops
If not → animation continues
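The stopping rule can be sketched in plain Kotlin. The 0.01f threshold and the fixed-fraction decay step below are illustrative stand-ins, not Compose's actual spring solver:

```kotlin
import kotlin.math.abs

// Count frames until the remaining distance drops below the threshold
fun framesToSettle(target: Float, threshold: Float, stepFraction: Float): Int {
    var value = 0f
    var frames = 0
    // Each "frame" closes a fixed fraction of the remaining gap, so the
    // value approaches the target but never reaches it exactly
    while (abs(target - value) >= threshold) {
        value += (target - value) * stepFraction
        frames++
    }
    return frames
}

fun main() {
    // With a threshold, the loop terminates; with threshold = 0f it would
    // spin almost indefinitely chasing floating-point dust
    println("Settled after ${framesToSettle(1f, 0.01f, 0.25f)} frames")
}
```

A larger threshold means the animation is declared finished sooner; a smaller one means it runs longer for precision you may not be able to see.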
How It Works for Different Types
Different types need different thresholds because “closeness” depends on the unit.
Animations in the old View system were a lot of ceremony. You’d set up an ObjectAnimator, attach a listener, call start(), remember to cancel on detach, and hope nothing leaked. For something as simple as fading a view, it felt disproportionate.
Compose takes a different approach. Instead of imperative animation commands, you describe what you want the UI to look like for a given state — and the animate*AsState family handles the transition automatically. No start/cancel lifecycle. No listeners unless you need them.
What Is animate*AsState?
animate*AsState is a group of composable functions that smoothly animate a value whenever its target changes. Feed it a target, and it produces a frame-by-frame animated value you can plug directly into your UI.
The * is a wildcard — there’s a variant for each value type you’re likely to animate: animateFloatAsState, animateDpAsState, animateColorAsState, animateIntAsState, animateOffsetAsState, and more.
They all follow the same pattern, so once you’ve used one, the others are trivial.
The Mental Model
The key shift from the View system: you don’t start animations. You change state.
State changes → animate*AsState detects the new target → interpolates toward it each frame
When isExpanded flips from false to true, you don’t tell anything to animate. You just update state, and the animated value catches up on its own. If the state changes again mid-flight, the animation redirects smoothly from wherever it currently is.
This is different from ValueAnimator, which needs explicit start/cancel calls and doesn’t know about your UI state at all.
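Here’s the pattern in code — a minimal sketch (Compose imports omitted; FadingBox is an invented name):

```kotlin
@Composable
fun FadingBox(isVisible: Boolean) {
    // The target flips between 1f and 0f; the animated value chases it
    val alpha by animateFloatAsState(
        targetValue = if (isVisible) 1f else 0f,
        label = "alpha"
    )
    Box(
        modifier = Modifier
            .size(100.dp)
            .graphicsLayer { this.alpha = alpha }
            .background(Color.Blue)
    )
}
```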
isVisible is boolean. When it toggles, animateFloatAsState picks up the new target and eases alpha toward it over several frames. Each frame triggers a recomposition, which re-reads the updated alpha — that’s the full animation loop.
The label parameter is optional, but set it anyway. It appears in the Android Studio Animation Inspector and makes debugging significantly less painful.
animateColorAsState: Transitioning Colors
Color is one of the more visually rewarding things to animate because even a 300ms cross-fade reads as deliberate and polished.
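A sketch of the pattern (the component name, colors, and text are placeholders):

```kotlin
@Composable
fun ActiveChip(isActive: Boolean, onToggle: () -> Unit) {
    val backgroundColor by animateColorAsState(
        targetValue = if (isActive) Color(0xFF1E88E5) else Color(0xFFE0E0E0),
        animationSpec = tween(durationMillis = 500),
        label = "chip background"
    )
    val textColor by animateColorAsState(
        targetValue = if (isActive) Color.White else Color.DarkGray,
        animationSpec = tween(durationMillis = 500),
        label = "chip text"
    )
    Box(
        modifier = Modifier
            .clip(RoundedCornerShape(16.dp))
            .background(backgroundColor)
            .clickable { onToggle() }
            .padding(horizontal = 16.dp, vertical = 8.dp)
    ) {
        Text(text = "Filter", color = textColor)
    }
}
```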
Both the background and text colors animate in sync. You don’t coordinate them — they just share the same state source (isActive), so they naturally stay in step.
The animationSpec = tween(durationMillis = 500) is where you control how the animation plays out. More on that below.
animateDpAsState: Expandable Cards
animateDpAsState works on any Dp value — height, width, padding, corner radius. A common use case is an expandable card:
```kotlin
@Composable
fun ExpandableCardExample() {
    var isExpanded by remember { mutableStateOf(false) }
    val cardHeight by animateDpAsState(
        targetValue = if (isExpanded) 200.dp else 80.dp,
        animationSpec = spring(
            dampingRatio = Spring.DampingRatioMediumBouncy,
            stiffness = Spring.StiffnessLow
        ),
        label = "card height"
    )

    Card(
        modifier = Modifier
            .fillMaxWidth()
            .height(cardHeight)
            .clickable { isExpanded = !isExpanded },
        elevation = CardDefaults.cardElevation(defaultElevation = 4.dp)
    ) {
        Column(modifier = Modifier.padding(16.dp)) {
            Text(
                text = "Tap to ${if (isExpanded) "collapse" else "expand"}",
                fontWeight = FontWeight.Bold,
                fontSize = 16.sp
            )
            if (isExpanded) {
                Spacer(modifier = Modifier.height(12.dp))
                Text(
                    text = "The card height is driven by animateDpAsState. " +
                        "The spring spec adds a slight overshoot on open.",
                    fontSize = 14.sp,
                    color = Color.Gray
                )
            }
        }
    }
}
```
Using spring instead of tween here adds a small overshoot when the card opens — the physics-based easing makes it feel more physical than a plain duration curve.
Animation Specs
animationSpec controls the character of the animation. There are three you’ll reach for regularly.
Pick tween when you need precise timing — UI tests, coordinated sequences, or matching a transition to an audio cue.
Common easing options:
FastOutSlowInEasing — decelerates into the final position (good for elements entering the screen)
LinearOutSlowInEasing — starts at constant speed, slows at the end (good for exits)
FastOutLinearInEasing — accelerates throughout (for emphasis)
EaseInOut — smooth on both ends, feels the most natural
LinearEasing — constant speed; fine for loaders, rarely right for UI transitions
spring — Physics-Based
```kotlin
val offsetX by animateFloatAsState(
    targetValue = targetPosition,
    animationSpec = spring(
        dampingRatio = Spring.DampingRatioMediumBouncy, // How much it overshoots
        stiffness = Spring.StiffnessLow                 // How fast it moves
    ),
    label = "offset"
)
```
spring doesn’t have a fixed duration — it settles based on physics. The two parameters to tune:
Damping ratio (controls overshoot):
NoBouncy (1f) — glides in cleanly, no overshoot
LowBouncy (0.75f) — barely noticeable bounce
MediumBouncy (0.5f) — clear bounce, works well for cards and buttons
HighBouncy (0.2f) — exaggerated overshoot, use it deliberately
Stiffness (controls speed):
VeryLow — slow, floaty
Low — relaxed
Medium — balanced default
High — snappy
VeryHigh — nearly instant
keyframes — Custom Intermediate Values
```kotlin
val scale by animateFloatAsState(
    targetValue = if (isPressed) 0.9f else 1f,
    animationSpec = keyframes {
        durationMillis = 300
        1f at 0       // frame 0ms
        1.1f at 100   // slightly overshoots at 100ms
        0.95f at 200  // settles low
        1f at 300     // lands at resting scale
    },
    label = "press scale"
)
```
Use keyframes for custom press effects or anything where you need control over intermediate values. It’s more verbose, but it gives you exact control over the curve at each timestamp.
Combining Multiple Animations: Like Button
Each animate*AsState call handles exactly one value. When you need several properties to animate at once, you just stack them. They all read from the same state and run concurrently without any coordination code.
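For example, a like button might animate both scale and tint from the same isLiked flag — a sketch, with the icon and values as placeholders:

```kotlin
@Composable
fun LikeButton(isLiked: Boolean, onToggle: () -> Unit) {
    val scale by animateFloatAsState(
        targetValue = if (isLiked) 1.2f else 1f,
        animationSpec = spring(dampingRatio = Spring.DampingRatioMediumBouncy),
        label = "like scale"
    )
    val tint by animateColorAsState(
        targetValue = if (isLiked) Color.Red else Color.Gray,
        label = "like tint"
    )
    Icon(
        imageVector = Icons.Default.Favorite,
        contentDescription = "Like",
        tint = tint,
        modifier = Modifier
            .graphicsLayer {
                scaleX = scale
                scaleY = scale
            }
            .clickable { onToggle() }
    )
}
```

Both values read isLiked, so they always move together — no coordination code needed.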
Banner notifications, bottom bars, toast-style messages — offset animation handles all of these. Start the composable off-screen and animate it into position.
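A sketch of a top banner (the -80.dp start offset is just "taller than the banner"):

```kotlin
@Composable
fun TopBanner(isVisible: Boolean) {
    val offsetY by animateDpAsState(
        targetValue = if (isVisible) 0.dp else (-80).dp,
        animationSpec = spring(dampingRatio = Spring.DampingRatioLowBouncy),
        label = "banner offset"
    )
    Box(
        modifier = Modifier
            .fillMaxWidth()
            .offset(y = offsetY)
            .background(Color(0xFF323232))
            .padding(16.dp)
    ) {
        Text("Saved!", color = Color.White)
    }
}
```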
When isVisible becomes true, the banner animates from its current offset to 0.dp, sliding down into view with a slight bounce. When set to false, it animates back to -80.dp, sliding out upward.
Reacting When an Animation Finishes
If you need to trigger something after the animation settles — navigate, update a flag, kick off the next step — use finishedListener:
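A sketch (isVisible and onFullyHidden are stand-ins for your own state and callback):

```kotlin
val alpha by animateFloatAsState(
    targetValue = if (isVisible) 1f else 0f,
    label = "fade",
    finishedListener = { finalValue ->
        // Called once, with the settled value — a safe place for side effects
        if (finalValue == 0f) onFullyHidden()
    }
)
```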
finishedListener fires once, with the final settled value. It does not fire on every frame — that’s what makes it safe to use for side effects.
Performance: Use graphicsLayer for Visual Transforms
For scale, alpha, and rotation, avoid stacking individual modifiers. Batch them in a single graphicsLayer block:
```kotlin
// Prefer this — all transforms applied in one pass on the render thread
Modifier.graphicsLayer {
    scaleX = scale
    scaleY = scale
    this.alpha = alpha // qualify with `this` so the scope property, not the outer value, is assigned
    rotationZ = rotation
}

// Avoid this for pure visual properties
Modifier
    .scale(scale)
    .alpha(alpha)
    .rotate(rotation)
```
graphicsLayer applies visual transformations during the draw phase, avoiding layout changes and reducing the cost of recomposition for purely visual updates. This makes it especially efficient for animations like alpha, translation, and scale—particularly in lists or frequently updated UI.
Keep targetValue Simple
If the logic for computing your target value is complex, extract it before passing it in:
```kotlin
// Fine
val scale by animateFloatAsState(
    targetValue = if (isExpanded) 1.2f else 1f,
    label = "scale"
)

// Better to extract first than inline a big when block
val targetScale = when {
    isExpanded && isSelected -> 1.3f
    isExpanded -> 1.15f
    isSelected -> 1.05f
    else -> 1f
}
val scale by animateFloatAsState(targetValue = targetScale, label = "scale")
```
When Not to Use animate*AsState
animate*AsState is the right tool when you’re animating a single value in response to a state flip. Reach for something else when:
You’re animating multiple values that need to stay in sync as a unit → updateTransition
You need an infinitely repeating animation → rememberInfiniteTransition
You’re tracking a pointer/drag gesture and need manual control → Animatable
The composable is entering or leaving the composition → AnimatedVisibility
The last one trips people up most often. animate*AsState can only animate a composable that’s already in the tree. If you’re using if (condition) { MyComposable() } and condition becomes false, MyComposable is gone — there’s nothing left to animate. Wrap it in AnimatedVisibility instead.
Full Example: Custom Animated Toggle
Here’s everything from this article working together in a single component — a toggle row with animated track color, thumb position, and a subtle press scale:
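A sketch of such a toggle — the sizes, colors, and component name are all placeholders:

```kotlin
@Composable
fun AnimatedToggle(checked: Boolean, onCheckedChange: (Boolean) -> Unit) {
    // 1. Track color cross-fades between the two states
    val trackColor by animateColorAsState(
        targetValue = if (checked) Color(0xFF4CAF50) else Color.LightGray,
        label = "track color"
    )
    // 2. Thumb slides with a springy overshoot
    val thumbOffset by animateDpAsState(
        targetValue = if (checked) 24.dp else 0.dp,
        animationSpec = spring(dampingRatio = Spring.DampingRatioMediumBouncy),
        label = "thumb offset"
    )
    // 3. Subtle scale change on the thumb
    val thumbScale by animateFloatAsState(
        targetValue = if (checked) 1f else 0.9f,
        label = "thumb scale"
    )

    Box(
        modifier = Modifier
            .width(52.dp)
            .height(28.dp)
            .clip(RoundedCornerShape(14.dp))
            .background(trackColor)
            .clickable { onCheckedChange(!checked) }
            .padding(2.dp)
    ) {
        Box(
            modifier = Modifier
                .offset(x = thumbOffset)
                .graphicsLayer {
                    scaleX = thumbScale
                    scaleY = thumbScale
                }
                .size(24.dp)
                .clip(CircleShape)
                .background(Color.White)
        )
    }
}
```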
Three animate*AsState calls, no third-party library, no animation framework — just state and a handful of composable functions.
Common Mistakes
Animating inside a LazyColumn without stable keys. Each item gets its own animation instance, which is correct — but if your remember isn’t keyed to the item’s identity, Compose may reuse the state for a different item when the list scrolls. Always key your remember calls to something stable and unique per item.
Expecting finishedListener to fire on every frame. It fires once, when the animation settles. If you want per-frame callbacks, you need Animatable with a custom coroutine loop.
Using animate*AsState for enter/exit animations. When a composable leaves the composition, it’s gone — animate*AsState has nothing to animate. Use AnimatedVisibility for any case where the composable needs to animate out before being removed.
Conclusion
animate*AsState in Jetpack Compose is one of those APIs you end up using all the time.
It keeps animation logic simple and close to your UI state, which is exactly how Compose is meant to work.
Start with small interactions. Once you get comfortable, you’ll naturally move to more advanced animation APIs when needed.
If you’ve been building UI with Jetpack Compose for more than a week, you already know the drill. You write a clean Composable — maybe a PrimaryButton, or a UserProfileCard — and then you immediately have to write something like this below it:
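…something like this (PrimaryButton and its parameters here are stand-ins for whatever component you just wrote):

```kotlin
@Preview(showBackground = true)
@Composable
fun PrimaryButtonPreview() {
    PrimaryButton(
        text = "Continue",
        onClick = {}
    )
}
```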
But think about how many Composables you build across a full project. Ten screens, each with five or six components — that’s fifty-plus preview functions you’re hand-typing (or copy-pasting and then refactoring). The mental cost isn’t the typing itself; it’s the context switching. You just nailed your component logic, and now you have to stop, copy the function name, jump below, paste, rename, and restructure. It breaks the creative flow completely.
The good news: Android Studio already has a built-in system designed exactly for this. You just need to set it up once.
Quick answer for the impatient: Go to Settings → Editor → Live Templates, create a new template with the abbreviation prev (or use the one already available in Android Studio), and paste the template code from the next section. That’s it. The rest of this post goes deeper into why and when to use each approach.
Live Templates — The Fastest Fix You’re Not Using
Live Templates are one of Android Studio’s most underused features. They’re essentially smart text snippets that expand when you type a short abbreviation and press Tab. IntelliJ has had them for years — Android Studio inherits them from the same codebase. Kotlin developers who come from other editors sometimes have no idea this exists.
For @Preview specifically, the goal is: you type prev, hit Tab, and the entire preview scaffold appears with your cursor already positioned inside it, ready to fill in any needed parameters.
Setting Up Your First Preview Live Template
Here’s the exact step-by-step process — no skipping ahead:
1. Open Settings
On macOS press ⌘ ,.
On Windows/Linux go to File → Settings.
In the search bar, type “Live Templates” to jump straight to it.
2. Create a new Template Group
In the Live Templates panel, click the + button on the right → select Template Group → name it something like Compose. This keeps things organized and separate from Android Studio’s built-in templates.
3. Add a new Live Template
With your new Compose group selected, click + again → this time select Live Template.
4. Fill in the abbreviation and description
Set Abbreviation to prev and give it a Description like “Compose @Preview function”. The description shows up in autocomplete hints, so keep it readable.
5. Paste the template text
This is the core part. Paste the code below exactly as shown — the $VARIABLE$ syntax is how Android Studio knows where to place your cursor and what to ask you to fill in.
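A template body along these lines matches the behavior described here (the variable name is your choice):

```kotlin
@Preview(showBackground = true)
@Composable
fun $COMPOSABLE_NAME$Preview() {
    $COMPOSABLE_NAME$($END$)
}
```

If the target file doesn’t already import @Preview and @Composable, you’ll need to add those imports the first time.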
The $COMPOSABLE_NAME$ variable is smart — when you tab into the template, Android Studio highlights every occurrence at once. Type the name once and it fills in both places simultaneously (the function name and the call inside it). The $END$ marker tells the editor where to park your cursor after you’re done naming — right inside the parentheses where you’d add parameters.
If you want, you can wrap it in your existing app theme or a Material theme like this:
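For example (swap MaterialTheme for your app’s own theme composable if you have one):

```kotlin
@Preview(showBackground = true)
@Composable
fun $COMPOSABLE_NAME$Preview() {
    MaterialTheme {
        $COMPOSABLE_NAME$($END$)
    }
}
```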
6. Define the applicable context
At the bottom of the template editor, click Define and check Kotlin. Without this step, the template won’t activate inside .kt files.
7. Hit OK and test it
Open any Kotlin file, type prev, and press Tab. The scaffold should appear.
What It Looks Like In Practice
Before & After
```kotlin
// You write this composable first:
@Composable
fun UserProfileCard(
    name: String,
    avatarUrl: String,
    isOnline: Boolean
) {
    // ... your component logic
}

// Then type "prev" + Tab, type "UserProfileCard", Tab again:
@Preview(showBackground = true)
@Composable
fun UserProfileCardPreview() {
    UserProfileCard(
        // ← cursor lands here, ready for sample data
    )
}
```
One thing to remember: The template inserts the composable name as a plain text call. If your Composable has required parameters, Android Studio won’t auto-fill them — you’ll need to add sample data yourself. That’s expected and by design; previews should use meaningful placeholder values, not auto-generated garbage.
Multi-Variant Previews: Light, Dark, and Different Screen Sizes
A single light-mode preview is fine for early development. But before you ship anything, you want to see your component in at least two states: light theme and dark theme. You might also want to check it on a compact phone screen vs a larger device. Doing this manually every time is even more tedious than writing a basic preview.
There are two solid ways to handle this in Compose. The cleaner of the two is a custom annotation that stacks multiple @Preview declarations — this keeps your composable files lean and consistent across your entire project.
Approach A: Custom Multi-Preview Annotation
Create a single annotation class in a shared file (something like PreviewAnnotations.kt in your UI module’s utils package):
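A sketch that matches the @DevicePreviews name used later in this post — the device spec strings assume a reasonably recent Compose tooling version:

```kotlin
import android.content.res.Configuration
import androidx.compose.ui.tooling.preview.Preview

@Preview(name = "Light", showBackground = true, uiMode = Configuration.UI_MODE_NIGHT_NO)
@Preview(name = "Dark", showBackground = true, uiMode = Configuration.UI_MODE_NIGHT_YES)
@Preview(name = "Phone", device = "spec:width=411dp,height=891dp")
@Preview(name = "Tablet", device = "spec:width=1280dp,height=800dp")
annotation class DevicePreviews
```

Annotate any preview function with @DevicePreviews and all four variants render in the preview pane.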
Between these two approaches, the custom annotation (Approach A) is better for teams and larger projects. It lives in source control, everyone uses the same preview config automatically, and updating it once updates every preview across the codebase. Live Templates are per-developer and per-machine — great for solo work, less ideal for shared codebases.
Note: Instead of defining a custom @DevicePreviews as above, you can use the built-in @PreviewScreenSizes.
File Templates: Scaffold Both the Composable and Preview at Once
Live Templates solve the in-file boilerplate problem. But what if your workflow always starts with creating a new file? If you find yourself doing File → New → Kotlin File/Class and then manually typing the @Composable and @Preview blocks from scratch, File Templates take this even further.
A File Template is a pre-defined structure that Android Studio uses when you create a new file through the right-click menu. You can define your own and make “New Composable File” a real option.
1. Open Settings → File and Code Templates
Navigate to Editor → File and Code Templates. You’ll see the default list of templates on the left (Kotlin File, Interface, Class, etc.).
2. Click + to create a new template
Name it something like Composable, set the extension to kt.
3. Paste the template body
Use the $NAME variable — Android Studio prompts the user to fill this in when they create the file.
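A body along these lines works — ${NAME} and ${PACKAGE_NAME} are predefined variables in Android Studio’s file templates:

```kotlin
package ${PACKAGE_NAME}

import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview

@Composable
fun ${NAME}() {
    // TODO: build the component UI
}

@Preview(showBackground = true)
@Composable
fun ${NAME}Preview() {
    ${NAME}()
}
```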
After saving this, right-clicking any package in your project tree will show New → Composable in the menu. You type the component name once, and you get a file with proper package declaration, imports, a blank composable, and its preview — all ready to go.
Which Approach Should You Actually Use?
The honest answer: it depends on your context. Here’s a clear breakdown to help you decide without overthinking it.
For most individual developers: start with a Live Template. It takes five minutes, pays off immediately, and you don’t need to touch it again. If you’re working on a team or a long-lived codebase, invest the extra ten minutes to set up a custom @DevicePreviews annotation and commit it to the repo. That way the entire team benefits without any individual setup.
Conclusion
The friction of writing @Preview boilerplate is real, but it’s entirely self-imposed. Android Studio has the tools to make this near-instant — you just need to spend fifteen minutes setting them up once. A Live Template handles the basic case in two keystrokes. A custom annotation handles multi-variant previews for teams. File Templates handle the new-file workflow.
Pick the one that fits your current workflow and set it up today. The next time you build a Composable, you’ll feel the difference immediately.
Animations are one of those things that feel easy until you actually try to wire them into a real screen. You start with a simple fade or size change, and suddenly you’re juggling state, re-composition, and timing issues that don’t behave the way you expected.
I ran into this while building a product listing screen. Small interactions like expanding cards, animating filters, and handling loading states quickly became messy. That’s when the Jetpack Compose Animation System started to make sense — not as a set of APIs, but as a model tied directly to state.
This post breaks that down in a practical way.
What the Jetpack Compose Animation System Actually Is
The core idea is straightforward:
Your UI depends on state, and animations happen when that state changes.
You don’t trigger animations manually. You describe what the UI should look like for a given state, and Compose handles the transition.
Instead of writing something like “start animation on click”, you write:
if expanded → height = 200dp
if collapsed → height = 100dp
When the state changes, Compose animates between those values.
Understanding this mental model will make everything else click. In Compose, your UI is a function of state:
UI = f(state) — When state changes, Compose re-renders the UI. Animations are just a smooth interpolation between two states over time. You don’t “run” an animation — you change state and tell Compose how to animate the transition.
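Stripped of the Compose machinery, an animation is just sampling between two state values over time. A plain-Kotlin illustration (the dp heights are arbitrary examples):

```kotlin
// Animations are interpolation: sample the value between two states over time
fun lerp(start: Float, target: Float, fraction: Float): Float =
    start + (target - start) * fraction

fun main() {
    val collapsed = 100f // collapsed height, in dp
    val expanded = 200f  // expanded height, in dp
    // A five-sample "timeline" from one state to the other
    for (f in listOf(0f, 0.25f, 0.5f, 0.75f, 1f)) {
        println("t=$f height=${lerp(collapsed, expanded, f)}dp")
    }
}
```

Compose does this for you every frame, with an easing curve or spring physics deciding how the fraction advances.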
The animation system in Compose has three layers, and it’s worth knowing which layer you’re working at:
Layer 1 — High-level APIs: AnimatedVisibility, AnimatedContent, Crossfade. These handle the most common cases with zero configuration needed.
Layer 2 — Value-based APIs: animate*AsState, updateTransition, InfiniteTransition. These animate specific values (Float, Dp, Color, etc.) that you then apply in your composables.
Layer 3 — Low-level APIs: Animatable, coroutine-based. Full manual control for complex sequencing, interruptions, or physics-based motion.
The golden rule: start at the highest level that solves your problem. Only go deeper when you genuinely need more control. Most production animations live happily in layers 1 and 2.
The Core Building Blocks
Before writing any animations, it helps to understand the main APIs you’ll actually use:
1. animate*AsState
For simple, one-off animations tied to a single value.
2. updateTransition
For animating multiple values based on the same state.
3. AnimatedVisibility
For showing and hiding composables with animation.
4. AnimatedContent
For switching between UI states.
5. rememberInfiniteTransition
For looping animations.
You don’t need all of them at once. Most real screens use 1–2 of these consistently.
Why This Model Works Well
Once you lean into this approach, a few things improve right away:
No need to manage animation lifecycle
No manual cancellation logic
UI stays consistent with state
Less glue code
This becomes especially useful when multiple properties change together. You don’t coordinate them manually. You just describe the end result.
Your First Real Animation
Let’s build something practical: a card that expands when clicked.
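A minimal sketch (Material 3 and the standard animation imports assumed; the 100.dp/200.dp values echo the earlier description):

```kotlin
@Composable
fun ExpandingCard() {
    var expanded by remember { mutableStateOf(false) }
    // Describe the height for each state; Compose animates the transition
    val height by animateDpAsState(
        targetValue = if (expanded) 200.dp else 100.dp,
        label = "card height"
    )
    Card(
        modifier = Modifier
            .fillMaxWidth()
            .height(height)
            .clickable { expanded = !expanded }
    ) {
        Text(
            text = if (expanded) "Tap to collapse" else "Tap to expand",
            modifier = Modifier.padding(16.dp)
        )
    }
}
```

Notice there is no "start animation" call anywhere — clicking just flips expanded, and the height follows.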
The Jetpack Compose Animation System feels strange at first because it flips the mental model. You’re not telling the UI how to animate. You’re describing how it should look in different states.
Once that clicks, animations become predictable.
Start small:
Animate size
Then color
Then combine them
After a few screens, you’ll stop thinking about “animations” entirely and just think in terms of state transitions.
Most Android performance improvements land as a framework update or a new API. This one is different. Starting with Android 15, Google added support for a 16 KB page size on ARM64 devices — and with Android 16, it’s becoming a hard requirement for apps that target new hardware.
If you haven’t looked into this yet, now is a good time. Apps that ship 4 KB-aligned native libraries will fail to load on 16 KB page-size devices. The failure isn’t graceful — it’s an UnsatisfiedLinkError and a crash.
This guide covers what the change is, which apps are affected, how to check your own APK, and what to actually do about it.
Memory Pages: A Quick Refresher
The OS doesn’t allocate memory one byte at a time — it works in fixed-size blocks called pages. For decades, Android (like most Linux systems) used a 4 KB page size. That made sense when RAM was limited and apps were simpler.
Modern flagship devices are a different story. They have multiple gigabytes of RAM, 64-bit ARM processors, and apps that load dozens of native libraries at startup. Managing all of that in 4 KB chunks means more page table entries, more TLB pressure, and more overhead on every app launch.
16 KB pages reduce that overhead. The OS manages fewer, larger chunks — fewer page faults at startup, fewer TLB misses during execution, and less kernel bookkeeping overall.
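The bookkeeping difference is easy to see with a quick calculation — here for a hypothetical 64 MB mapped region:

```kotlin
// Page-table entries needed to map a region at a given page size
fun pageEntries(regionBytes: Long, pageSizeBytes: Long): Long =
    regionBytes / pageSizeBytes

fun main() {
    val region = 64L * 1024 * 1024 // a hypothetical 64 MB of mapped memory
    println("4 KB pages:  ${pageEntries(region, 4L * 1024)} entries")  // 16384
    println("16 KB pages: ${pageEntries(region, 16L * 1024)} entries") // 4096
}
```

Four times fewer entries to create, walk, and cache — multiplied across every mapping in every running process.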
Why Google Made the Change
The performance case is real:
Faster cold starts. Fewer pages need to be mapped during app startup. Google’s benchmarks showed cold launch improvements of up to 30% on devices running a 16 KB page-size kernel.
Better TLB efficiency. The TLB (Translation Lookaside Buffer) is a small hardware cache that maps virtual addresses to physical memory. With 16 KB pages, each TLB entry covers four times more memory, which means fewer misses on cache-heavy operations.
Less kernel overhead. Fewer pages means a smaller page table. The kernel spends less time on memory management and more time running your code.
Industry alignment. Apple has used 16 KB pages on ARM devices for years. The mainline Linux kernel has progressively added support too. Android isn’t ahead of the curve here — it’s catching up.
Where Things Stand in 2026
Android 15 introduced 16 KB page size support in the emulator so developers could start testing.
Android 16 is expected to require 16 KB compliance for apps targeting API 36 on supported hardware.
Pixel 9 and later are expected to ship with kernels configured for 16 KB pages.
Play Console already shows warnings for apps that bundle 4 KB-aligned .so files when targeting API 35+.
The install base of 16 KB devices is still small, but it will grow quickly as new flagships ship. Getting ahead of this now is much easier than scrambling when Play starts rejecting updates.
Does This Affect Your App?
It depends entirely on whether your app includes native code.
Pure Kotlin or Java apps
You’re largely fine. The Android Runtime handles .dex alignment automatically, so managed code isn’t affected. The one thing to watch is third-party SDKs — they sometimes bundle native .so files you didn’t write and may not have checked.
Apps with NDK or native libraries
This is where the requirement has real teeth. If your app includes:
Native libraries (.so files) built with the NDK
Pre-built .so files from third-party SDKs
A game engine like Unity or Cocos2d
Audio, video, or image processing libraries with native bindings
…then every one of those .so files needs to be compiled with 16 KB-aligned ELF segments. If any aren’t, the OS on a 16 KB device will refuse to load them.
Check Your APK
Before touching any build config, find out where you actually stand.
Use readelf on your .so files
Bash
# Unzip the APK
unzip your-app.apk -d app-contents

# Inspect a native library
readelf -l app-contents/lib/arm64-v8a/libyourlibrary.so | grep LOAD
Look at the alignment column on the right side of each LOAD segment line:
0x4000 = 16384 bytes = 16 KB compliant
0x1000 = 4096 bytes = 4 KB needs recompiling
Compliant output:
LOAD 0x000000 ... 0x001abc 0x001abc R   0x4000
LOAD 0x002000 ... 0x005def 0x005def R E 0x4000
Non-compliant output:
LOAD 0x000000 ... 0x001abc 0x001abc R 0x1000
Do this for every .so in the APK, not just the ones you wrote. Third-party libraries need to pass too.
Run AGP’s built-in lint check
Android Gradle Plugin 8.5+ includes a lint check specifically for this. Run:
./gradlew lint
Look for warnings tagged PageSizeAlignment. They’ll call out each non-compliant library by name.
Fix Your Own Native Libraries
If you maintain native code with the NDK, the fix is a single linker flag.
With CMake
CMake
# CMakeLists.txt
cmake_minimum_required(VERSION 3.22.1)
project(MyNativeLib)

add_library(
    mynativelib
    SHARED
    src/main/cpp/mynativelib.cpp
)

# Tell the linker to align ELF LOAD segments to 16 KB boundaries
target_link_options(mynativelib PRIVATE "-Wl,-z,max-page-size=16384")

find_library(log-lib log)

target_link_libraries(
    mynativelib
    ${log-lib}
)
The flag -Wl,-z,max-page-size=16384 passes max-page-size=16384 directly to the linker. It sets the alignment of every LOAD segment in the output .so to 16 KB. That’s all the change requires on your end.
After rebuilding, re-run the readelf check to confirm the alignment value changed from 0x1000 to 0x4000.
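If your module builds with ndk-build instead of CMake, the same flag goes through `LOCAL_LDFLAGS` in Android.mk. A minimal sketch:

```makefile
# Android.mk: align ELF LOAD segments to 16 KB, mirroring the CMake flag
LOCAL_LDFLAGS += -Wl,-z,max-page-size=16384
```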
One thing worth knowing: a 16 KB-aligned .so runs fine on 4 KB devices too. The extra alignment padding is harmless on older hardware. You don’t need separate builds — one .so covers both.
Kotlin: What You Need to Handle
Kotlin doesn’t control ELF alignment, but there are places where Kotlin code loads native libraries and should handle failures gracefully.
Safe native library loading
System.loadLibrary() throws UnsatisfiedLinkError if a .so fails to load — which on a 16 KB device usually means the library isn’t aligned. Without handling this, the app just crashes.
Kotlin
// NativeLibraryLoader.kt
import android.util.Log

object NativeLibraryLoader {

    private const val TAG = "NativeLibraryLoader"

    /**
     * Loads a native library and returns false (instead of crashing)
     * if it fails. On 16 KB page-size devices, an UnsatisfiedLinkError
     * usually means the .so wasn't compiled with max-page-size=16384.
     */
    fun loadSafely(libraryName: String): Boolean {
        return try {
            System.loadLibrary(libraryName)
            Log.d(TAG, "Loaded: lib$libraryName.so")
            true
        } catch (e: UnsatisfiedLinkError) {
            Log.e(
                TAG,
                "Failed to load lib$libraryName.so — possible 16 KB alignment issue. " +
                    "Recompile with: -Wl,-z,max-page-size=16384",
                e
            )
            false
        } catch (e: SecurityException) {
            Log.e(TAG, "Security exception loading lib$libraryName.so", e)
            false
        }
    }
}
Use it in your Activity or Application:
Kotlin
// MainActivity.kt
import android.os.Bundle
import androidx.appcompat.app.AlertDialog
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        val loaded = NativeLibraryLoader.loadSafely("mynativelib")
        if (!loaded) {
            showCompatibilityError()
        }
    }

    private fun showCompatibilityError() {
        AlertDialog.Builder(this)
            .setTitle("Compatibility Issue")
            .setMessage(
                "A required component couldn't load on this device. " +
                    "Try updating the app to get the latest compatibility fixes."
            )
            .setPositiveButton("OK", null)
            .show()
    }
}
This avoids a crash and gives the user a message they can actually act on, instead of an unexplained native crash.
Detecting page size at runtime
Sometimes you need to know which page size the device is using — for example, to decide whether to enable a feature backed by a library you haven’t fully audited yet.
Kotlin
// PageSizeDetector.kt
import android.system.Os
import android.system.OsConstants

/**
 * Reads the system page size at runtime using the POSIX sysconf API.
 * Returns 4096 on standard devices, 16384 on 16 KB page-size devices.
 */
object PageSizeDetector {

    fun getPageSizeInBytes(): Long {
        return Os.sysconf(OsConstants._SC_PAGESIZE)
    }

    fun is16KBPageSize(): Boolean {
        return getPageSizeInBytes() == 16384L
    }

    fun description(): String {
        return when (getPageSizeInBytes()) {
            4096L -> "4 KB"
            16384L -> "16 KB"
            else -> "${getPageSizeInBytes()} bytes (unknown)"
        }
    }
}
Log it at startup — it takes one line and has saved debugging time more than once when a crash report comes in from an unfamiliar device:
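A minimal sketch, assuming a custom Application class registered in the manifest ("MyApp" is a hypothetical name):

```kotlin
// MyApp.kt
import android.app.Application
import android.util.Log

class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        // One line: record the device's page size in every session's logs.
        Log.i("MyApp", "Device page size: ${PageSizeDetector.description()}")
    }
}
```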
When you see a crash log from a device you can’t reproduce locally, the page size entry tells you whether you’re looking at an alignment problem or something else entirely.
Auditing bundled native libraries at debug time
This helper scans your app’s native library directory and lists every .so it finds. It won’t tell you the alignment directly (use readelf for that), but it gives you a complete list to work through — which matters when you’re auditing a project with a lot of dependencies.
Kotlin
// SdkCompatibilityChecker.kt
import android.content.Context
import android.util.Log
import java.io.File

/**
 * Lists all native libraries bundled in the APK at runtime.
 * Run this in debug builds to build your audit list.
 * For actual alignment verification, use readelf on each file.
 */
object SdkCompatibilityChecker {

    private const val TAG = "SdkCompatibilityChecker"

    fun findNativeLibraries(context: Context): List<String> {
        val nativeLibDir = File(context.applicationInfo.nativeLibraryDir)
        if (!nativeLibDir.exists() || !nativeLibDir.isDirectory) {
            Log.w(TAG, "No native library directory found.")
            return emptyList()
        }
        return nativeLibDir
            .listFiles { file -> file.name.endsWith(".so") }
            ?.map { it.name }
            ?: emptyList()
    }

    fun auditAndLog(context: Context) {
        val libs = findNativeLibraries(context)
        if (libs.isEmpty()) {
            Log.i(TAG, "No native libraries found.")
            return
        }
        Log.w(TAG, "Found ${libs.size} native libraries — verify each with readelf:")
        libs.forEach { Log.w(TAG, "  -> $it") }
    }
}
Wire it into your Application class behind a BuildConfig.DEBUG check:
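A minimal sketch ("MyApp" is a hypothetical Application subclass registered in the manifest; BuildConfig comes from your app module):

```kotlin
// MyApp.kt
import android.app.Application

class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        if (BuildConfig.DEBUG) {
            // Debug builds only: log every bundled .so for the audit list.
            SdkCompatibilityChecker.auditAndLog(this)
        }
    }
}
```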
Every debug run now logs a full list of native libraries. Paste it into a spreadsheet, mark which ones you own, and track the audit from there.
Test on a 16 KB Emulator
You don’t need a physical device for this. Android Studio ships with 16 KB emulator images.
Create the emulator
Open Android Studio → Device Manager
Click Create Device
Pick a Pixel 8 or later hardware profile
On the system image screen, select an image labelled “16k page size” (available for API 35 and API 36)
Finish the setup and start the emulator
Confirm it’s configured correctly
adb shell getconf PAGE_SIZE
16384 means you’re on a 16 KB device. 4096 means something went wrong with the AVD setup.
What to watch for when running your app
Crash on launch → a native library failed to load; check Logcat for the library name
UnsatisfiedLinkError in Logcat → that specific .so is 4 KB aligned
App runs normally → you’re compliant
Dealing With Third-Party Libraries You Can’t Recompile
Your code might be clean, but one of your dependencies is shipping a 4 KB-aligned .so that you have no control over.
Option 1 — Contact the vendor. File a GitHub issue or support ticket referencing the Android 16 KB page size requirement. Most major SDKs (Firebase, Google Play Services, Crashlytics) are already compliant. Smaller or older SDKs may need a nudge.
Option 2 — Gate the feature at runtime. While you wait for the vendor to ship a fix, use PageSizeDetector to disable the feature on affected devices:
Kotlin
// FeatureManager.kt
import android.util.Log

object FeatureManager {

    /**
     * Returns false on 16 KB page-size devices if the underlying
     * native library hasn't been verified as compliant yet.
     * Flip this to true once your SDK vendor ships a fix.
     */
    fun isNativeFeatureEnabled(): Boolean {
        if (PageSizeDetector.is16KBPageSize()) {
            Log.w("FeatureManager", "Skipping native feature on 16 KB device — awaiting SDK update.")
            return false
        }
        return true
    }
}
Option 3 — Write a Kotlin fallback. For features where a fallback is feasible, have two paths: the native implementation for standard devices, and a pure Kotlin path for 16 KB devices until the library is updated.
Kotlin
// ImageProcessor.kt
import android.graphics.Bitmap

class ImageProcessor {

    /**
     * Uses the fast native path on verified devices, falls back to
     * Kotlin on 16 KB page-size devices until the native library is updated.
     */
    fun processImage(bitmap: Bitmap): Bitmap {
        return if (FeatureManager.isNativeFeatureEnabled()) {
            processImageNative(bitmap) // C++ via JNI
        } else {
            processImageKotlin(bitmap) // Pure Kotlin fallback
        }
    }

    private external fun processImageNative(bitmap: Bitmap): Bitmap

    private fun processImageKotlin(bitmap: Bitmap): Bitmap {
        val copy = bitmap.copy(bitmap.config, true)
        // apply transformations
        return copy
    }
}
This keeps the app working on all devices. The Kotlin path is slower, but it beats a crash.
Google Play Requirements
Play Console has been surfacing warnings for 4 KB-aligned libraries since the requirement was announced. For apps targeting API 35 (Android 15) and above, 16 KB compliance is now mandatory for all new apps and updates. Google initially allowed deadline extensions, but as of 2026, non-compliant apps with native code face immediate rejection during the upload process.
Check Play Console → Release → App bundle explorer → [Select Version] → Supported page sizes for any warnings or “Not Supported” labels regarding native library alignment. Deal with them immediately to ensure your releases are not blocked.
Real Performance Numbers
The gains are genuine but not uniform across all app types:
| Metric | 4 KB Pages | 16 KB Pages |
| --- | --- | --- |
| Cold app launch | Baseline | Up to 30% faster |
| TLB miss rate | Higher | Lower |
| Kernel page table size | Larger | Smaller |
| Memory fragmentation | More | Less |
| App RAM footprint | Baseline | Marginally higher |
The trade-off: small allocations get rounded up to the next 16 KB boundary, so there’s a slight increase in memory usage. For most apps it’s a few hundred KB at most — well worth the startup speed improvement.
Migration Checklist
Plaintext
Android 16 KB Page Size - Pre-ship Checklist
=============================================
□ Unzipped APK and located all .so files under lib/arm64-v8a/
□ Ran readelf -l on each .so - confirmed LOAD alignment is 0x4000
□ Added -Wl,-z,max-page-size=16384 to CMakeLists.txt or Android.mk
□ Rebuilt native libraries - re-verified alignment with readelf
□ Audited all third-party .so files - opened tickets with non-compliant vendors
□ Added NativeLibraryLoader with UnsatisfiedLinkError handling
□ Added PageSizeDetector and logging to Application.onCreate()
□ Added SdkCompatibilityChecker to debug builds
□ Created a 16k page size AVD in Android Studio
□ Ran the app on the 16k emulator - no crashes, no UnsatisfiedLinkError
□ Ran ./gradlew lint - no PageSizeAlignment warnings
□ Checked Play Console - no native library alignment warnings
FAQ
Does this affect all Android devices right now?
No. The 16 KB page size requires specific kernel and hardware support. Older devices will keep using 4 KB pages. But as Pixel 9 and later devices ship with 16 KB kernels, the affected install base will grow steadily.
My app is pure Kotlin with no NDK. Do I need to do anything?
Probably not. ART handles alignment for managed code automatically. Just double-check your Gradle dependencies for any SDKs that bundle .so files — those are the only risk for a pure Kotlin app.
Will a 4 KB-aligned .so actually crash the app?
Yes. On a 16 KB page-size device, System.loadLibrary() will throw UnsatisfiedLinkError if the .so isn’t properly aligned. That’s an app crash unless you catch it.
Can one .so file work on both 4 KB and 16 KB devices?
Yes. A library compiled with -Wl,-z,max-page-size=16384 works fine on 4 KB devices — the extra alignment is just padding that gets ignored. You don’t need separate builds for different page sizes.
What about Unity?
Unity generates native .so files, so yes, it’s affected. Unity has been shipping fixes in recent LTS versions. Make sure you’re on an up-to-date Unity LTS release and rebuild your project after upgrading.
Conclusion
The Android 16 KB page size change is the kind of requirement that’s easy to ignore until it starts causing crashes on new hardware. The fix is straightforward if you own your native code — it’s one linker flag and a rebuild. The harder work is tracking down third-party SDKs that haven’t updated yet and building a plan for those.
Start by running the readelf check on your APK today. If everything comes back as 0x4000, you’re done. If not, the checklist above has every step you need.
If you’ve worked with Android long enough, you already know this: emulator performance isn’t just about speed, it’s about consistency.
A fast emulator that behaves unpredictably is worse than a slightly slower one that’s stable.
This guide focuses on Android Emulator Settings that hold up in real-world development. Not just for solo projects, but for teams, CI pipelines, and production-grade workflows.
How to Think About Emulator Performance
Before changing settings, it helps to understand what actually impacts emulator performance.
There are three main bottlenecks:
CPU virtualization overhead
Memory pressure (host + emulator)
GPU rendering pipeline
Most “tuning tips” online ignore this and suggest arbitrary numbers. In practice, performance tuning should be constraint-driven, not guesswork.
Core Android Emulator Settings That Make a Difference
Let’s go through the settings that consistently make a difference.
CPU Allocation: Less Is Often More
A common mistake is over-allocating CPU cores.
What works in practice:
2 cores → stable baseline (recommended for most cases)
3–4 cores → only if profiling shows CPU bottlenecks
Why this matters: The emulator runs inside a virtualized environment. Giving it too many cores can increase context switching and hurt overall system responsiveness.
Rule of thumb: If your host machine slows down, your emulator will too.
RAM Allocation: Avoid Starving the Host
This is where people usually overdo it.
Start with 2–4 GB
Increase only if you see real issues (UI lag, memory errors)
Giving the emulator too much RAM can slow down everything else on your system, which ends up hurting performance overall.
VM Heap Size
This one gets confused with RAM, but it’s not the same thing.
VM Heap controls how much memory an app inside the emulator can use, not the emulator itself.
Default value is usually fine
Increase only if you’re testing memory-heavy apps (large bitmaps, video, complex Compose UIs)
If you set it too high without a reason:
You won’t see real benefits
You may hide memory issues that show up on real devices
Practical note: If your app only runs after increasing VM Heap, that’s a signal to fix memory usage, not raise limits.
Watch for:
OutOfMemoryError
Frequent GC activity in Logcat
UI stutter caused by memory pressure
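The three settings covered so far (CPU cores, RAM, VM heap) map to keys in the AVD's config.ini, typically found under ~/.android/avd/ in a directory named after the AVD. A sketch with the values discussed above; treat the exact key names as an assumption and verify against your own file, since they can vary between emulator releases:

```ini
hw.cpu.ncore=2
hw.ramSize=4096
vm.heapSize=256
```

hw.ramSize and vm.heapSize are in MB. Edit the file with the emulator shut down, then cold boot so the new values take effect.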
System Images: x86_64 vs ARM (Context Matters in 2026)
For most desktop environments:
x86_64 images → still the default for performance
However:
On Apple Silicon (ARM hosts), ARM images can perform better due to reduced translation overhead.
Takeaway: Choose the image based on your host architecture, not habit.
Hardware Acceleration: Non-Negotiable
Without hardware acceleration, nothing else will save you.
Windows → WHPX / Hyper-V
Linux → KVM
macOS → Hypervisor.framework
If virtualization isn’t enabled in BIOS/UEFI, performance will collapse.
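A quick way to check is the emulator binary's own acceleration probe, which reports the detected hypervisor and whether it is usable:

```shell
# Prints the hypervisor status (KVM / WHPX / Hypervisor.framework) and
# whether hardware acceleration is available on this host.
emulator -accel-check
```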
GPU Rendering: Prefer Hardware, Validate When Needed
Set graphics to:
Hardware (GLES 2.0 or 3.0)
This improves:
UI responsiveness
Frame rendering
Animation smoothness
When to switch to software:
Debugging rendering issues
Investigating device-specific GPU bugs
Resolution and Device Profile
Higher resolution increases GPU load.
Practical setup:
Use 720p or 1080p for daily development
Use higher resolutions only for layout validation
Avoid treating the emulator like a flagship device unless required.
Quick Boot vs Cold Boot: Know the Trade-Off
Quick Boot is convenient, but not always safe.
Use Quick Boot when:
Iterating during development
You need faster startup
Use Cold Boot when:
Running tests
Debugging inconsistent behavior
Working in CI environments
Snapshots can introduce subtle state issues that are hard to trace.
Settings for CI/CD and Team Environments
This is where things usually break if you’re not careful.
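As a starting point, a typical headless launch for CI can be sketched like this ("ci_avd" is a hypothetical AVD name; verify the flags against your installed emulator version):

```shell
# Headless, deterministic emulator launch for CI.
# -no-snapshot forces a cold boot every run, avoiding stale-state flakiness;
# swiftshader_indirect is a software GPU renderer that behaves consistently headless.
emulator -avd ci_avd -no-window -no-audio -no-boot-anim -no-snapshot -gpu swiftshader_indirect
```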
Jetpack Compose continues to evolve, and one of the most interesting updates is the new Ripple API. If you’ve been building modern Android UIs, you’ve probably used ripple effects to give users visual feedback when they tap on buttons, cards, or other interactive elements. That subtle wave animation plays a big role in making interactions feel responsive and intuitive.
With the latest updates, Google has refined how ripple indications work in Compose. The new approach makes ripple effects more efficient, more customizable, and better aligned with Material Design 3.
In this article, we’ll explore what changed, why these updates matter, and how you can start using the new Ripple API in Jetpack Compose in your apps.
What’s Covered in This Guide
We’ll walk through:
What ripple effects are in Jetpack Compose
Why the ripple API was updated
How the new Ripple API works
How to implement it using Kotlin
Best practices for customizing ripple behavior
By the end, you’ll have a clear understanding of how the new ripple system works and how to apply it effectively in your Compose UI.
What Is Ripple in Jetpack Compose?
Ripple is the touch feedback animation shown when a user taps or presses a UI component.
For example:
Buttons
Cards
List items
Icons
Navigation items
When the user taps an element, a circular wave spreads from the touch point.
This animation improves:
User experience
Accessibility
Visual feedback
Interaction clarity
In Material Design, ripple is the default interaction effect.
In Jetpack Compose, ripple is typically used with clickable modifiers.
Kotlin
Modifier.clickable { }
By default, this modifier automatically adds ripple feedback.
Why the Ripple API Changed
For a long time, ripple effects in Jetpack Compose were implemented through the Indication system, typically using rememberRipple(). While this approach worked well, it came with a few limitations.
Composition overhead: Since rememberRipple() was a composable function, it participated in the recomposition cycle. In some cases, this introduced unnecessary overhead for something that should ideally remain lightweight.
Memory usage: Each usage created new state objects, which could increase memory usage when ripple effects were applied across many UI components.
Tight coupling with Material themes: The implementation was closely tied to Material 2 and Material 3. This made it less flexible for developers building custom design systems or UI frameworks.
To address these issues, the ripple implementation has been redesigned using the Modifier.Node architecture. This moves ripple handling closer to the rendering layer, allowing it to be drawn more efficiently without triggering unnecessary recompositions.
As a result, the updated API makes ripple behavior:
More performant
More consistent with Material 3
Easier to customize
Better aligned with the modern Indication system
Overall, this change simplifies how ripple effects are handled while improving performance and flexibility for Compose developers.
Old Ripple Implementation (Before the Update)
Before the New Ripple API in Jetpack Compose, developers often used rememberRipple().
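For reference, a sketch of the old pattern (rememberRipple, from androidx.compose.material.ripple, is now deprecated):

```kotlin
// Old approach (deprecated): rememberRipple() with an explicit interaction source.
Box(
    modifier = Modifier
        .size(120.dp)
        .background(Color.LightGray)
        .clickable(
            interactionSource = remember { MutableInteractionSource() },
            indication = rememberRipple(bounded = true, color = Color.Gray),
            onClick = { }
        )
)
```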
interactionSource → tracks user interactions (press, hover, focus)
Although this worked well, it required extra setup for customization.
The New Ripple API in Jetpack Compose
The New Ripple API in Jetpack Compose simplifies ripple creation and aligns it with Material3 design system updates.
The ripple effect is now managed through Material ripple APIs and better indication handling.
In most cases, developers no longer need to manually specify ripple.
Default Material components automatically apply ripple.
Kotlin
Button(onClick = { }) {
    Text("Click Me")
}
This button already includes ripple.
However, when working with custom layouts, you may still need to configure ripple manually.
Key Changes from Old to New
Key changes in Compose Ripple APIs (1.7+)
rememberRipple() is deprecated. Use ripple() instead. The old API relied on the legacy Indication system, while ripple() works with the new node-based indication architecture.
RippleTheme and LocalRippleTheme are deprecated. Material components no longer read LocalRippleTheme. For customization use RippleConfiguration / LocalRippleConfiguration or implement a custom ripple.
Many components now default interactionSource to null, allowing lazy creation of MutableInteractionSource to reduce unnecessary allocations.
The indication system moved to the Modifier.Node architecture. Indication#rememberUpdatedInstance was replaced by IndicationNodeFactory for more efficient rendering.
Key Differences at a Glance:

| Before (deprecated) | Now |
| --- | --- |
| rememberRipple() | ripple() |
| RippleTheme / LocalRippleTheme | RippleConfiguration / LocalRippleConfiguration |
| Indication#rememberUpdatedInstance | IndicationNodeFactory |
| Explicit MutableInteractionSource required | interactionSource can be null (created lazily) |
Basic Example Using the New Ripple API
Let’s start with a simple example by creating a clickable Box with a ripple effect. This demonstrates how touch feedback appears when a user interacts with a UI element.
Before looking at the new approach, here’s how ripple was typically implemented in earlier versions of Compose.
The previous implementation relied on rememberRipple(), which has now been replaced by the updated ripple API.
Using the New Ripple API:
Here’s how you can implement the same behavior using the updated ripple system.
Kotlin
@Composable
fun RippleBox() {
    val interactionSource = remember { MutableInteractionSource() } // Or pass null to lazy-init

    Box(
        modifier = Modifier
            .size(120.dp)
            .background(Color.LightGray)
            .clickable(
                interactionSource = interactionSource,
                indication = ripple(), // From material3 or material
                onClick = {}
            )
    ) {
        Text("Tap me!")
    }
}
In many cases you can simply pass interactionSource = null, which allows Compose to lazily create it only when needed.
Understanding the Key Components
MutableInteractionSource
Kotlin
val interactionSource = remember { MutableInteractionSource() }
MutableInteractionSource emits interaction events such as:
Press
Focus
Hover
Drag
Indications like ripple observe these events to trigger animations.
clickable modifier
Kotlin
Modifier.clickable()
This makes the composable interactive and triggers ripple on tap.
ripple()
Kotlin
indication = ripple()
ripple() is the new ripple API in Jetpack Compose and replaces the deprecated rememberRipple() implementation.
By default:
The ripple color is derived from MaterialTheme
The ripple originates from the touch point
The ripple is bounded within the component by default
Unlike the previous API, ripple() is not a composable function and works with the newer Modifier.Node-based indication system, which reduces allocations and improves performance.
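The color customization being described can be sketched as an override passed to ripple() (assuming the material3 overload, which accepts a color parameter):

```kotlin
indication = ripple(
    color = Color.Red
)
```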
When the user taps the component, the ripple will appear red instead of the default theme color.
Example: Unbounded Ripple
By default, ripple is bounded, meaning it stays inside the component.
If you want ripple to spread outside the element:
Kotlin
indication = ripple(bounded = false)
Use Cases
Unbounded ripple works well for:
floating action buttons
icon buttons
circular elements
Example: Setting Ripple Radius
You can also control ripple size.
Kotlin
indication = ripple(radius = 60.dp)
The radius defines how far the ripple spreads from the touch point.
This can help match custom UI designs.
Advanced Customization: RippleConfiguration
If you want to change the color or the alpha (transparency) of your ripples globally, or for a specific part of your app, the old LocalRippleTheme is deprecated. The modern approach uses RippleConfiguration together with LocalRippleConfiguration, which lets you customize ripple appearance for a specific component or subtree of your UI.
Example: Custom Ripple
Kotlin
val myCustomRippleConfig = RippleConfiguration(
    color = Color.Magenta,
    rippleAlpha = RippleAlpha(
        pressedAlpha = 0.2f,
        focusedAlpha = 0.2f,
        draggedAlpha = 0.1f,
        hoveredAlpha = 0.4f
    )
)

CompositionLocalProvider(
    LocalRippleConfiguration provides myCustomRippleConfig
) {
    Button(onClick = { }) {
        Text("I have a Magenta Ripple!")
    }
}
RippleConfiguration
A configuration object that defines the visual appearance of ripple effects.
RippleAlpha
Controls the ripple opacity for different interaction states:
pressedAlpha
focusedAlpha
draggedAlpha
hoveredAlpha
CompositionLocalProvider
Wraps a section of UI and provides a custom ripple configuration to all child components that read LocalRippleConfiguration.
With the new ripple API in Jetpack Compose, many Material components already include ripple feedback by default. This means you usually don’t need to manually specify indication = ripple().
Examples include:
Button
Card (clickable version in Material3)
ListItem
IconButton
NavigationBarItem
These components internally handle interaction feedback using the ripple system.
Kotlin
Card(
    onClick = { }
) {
    Text("Hello")
}
In Material3, providing onClick automatically makes the Card clickable and displays the ripple effect.
No manual ripple indication is required.
Best Practices for Using the New Ripple API in Jetpack Compose
1. Prefer Default Material Components
Material components already include ripple behavior.
This keeps UI consistent with Material Design.
2. Avoid Over-Customizing Ripple
Too much customization can create inconsistent UX.
Stick with theme defaults unless necessary.
3. Use interactionSource = null Unless You Need It
In modern Compose versions, you usually do not need to create a MutableInteractionSource manually.
interactionSource can now be null, allowing Compose to lazily create it when needed
This simplifies the code and avoids unnecessary allocations.
If you need to observe interaction events, you can still provide your own MutableInteractionSource.
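For example, a sketch that observes press state via collectIsPressedAsState (an extension from androidx.compose.foundation.interaction):

```kotlin
@Composable
fun PressAwareBox() {
    val interactionSource = remember { MutableInteractionSource() }
    val isPressed by interactionSource.collectIsPressedAsState()

    Box(
        modifier = Modifier
            .size(120.dp)
            // React to the observed press state while ripple still draws normally.
            .background(if (isPressed) Color.DarkGray else Color.LightGray)
            .clickable(
                interactionSource = interactionSource,
                indication = ripple(),
                onClick = { }
            )
    ) {
        Text(if (isPressed) "Pressed" else "Tap me")
    }
}
```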
Conclusion
The New Ripple API in Jetpack Compose simplifies how developers implement touch feedback while improving performance and consistency.
Key takeaways:
Ripple provides visual feedback for user interactions
The new API replaces rememberRipple() with ripple()
Material components already include ripple by default
Custom components can easily add ripple using Modifier.clickable
The updated system improves performance and flexibility
If you build modern Android apps with Jetpack Compose, understanding the New Ripple API in Jetpack Compose is essential for creating responsive and user-friendly interfaces.
AI-powered coding assistants have completely changed how Android developers write code. Features like Gemini in Android Studio read your project files, understand your codebase structure, and suggest intelligent completions. That’s incredibly helpful — but it also raises an important question:
What exactly is the AI reading?
The answer is often “more than you think.” Configuration files, API keys stored in local .properties files, internal endpoint URLs, analytics tokens — all of these can end up in the AI’s context window if you’re not careful.
That’s exactly where the .aiexclude file comes in. It’s Android Studio’s answer to the .gitignore file, but instead of telling Git what to ignore, it tells the AI assistant what files should stay completely off-limits.
In this guide, we’ll walk you through everything you need to know about the .aiexclude file — what it is, why it matters, how to create and configure it, and real-world patterns to protect your project.
What Is the .aiexclude File?
The .aiexclude file is a plain text configuration file that tells Android Studio’s AI features which files and folders it should never index, read, or use as context when generating suggestions.
Think of it like a privacy wall between your sensitive project files and the AI. When a file is listed in the .aiexclude file, it simply becomes invisible to the AI — it won’t factor into any code completions, refactoring suggestions, or AI-assisted search results.
This feature was introduced as developers started using AI assistants more deeply in their workflows and needed a simple, declarative way to control what data gets shared.
Why Does This Matter?
Here’s a realistic scenario: You’re building a fintech app. You have a local.properties file with a Stripe API key sitting in your project root. Your .gitignore already excludes it from version control. But your AI assistant doesn’t know about .gitignore — it reads every file it can find in your project.
Without a .aiexclude file, that API key could end up in the AI’s context. With one, you can ensure it’s never touched.
Where to Place the .aiexclude File
The .aiexclude file can live in two places, and the location determines its scope:
1. Project root directory — Applies rules across the entire project.
2. Inside a specific module or subdirectory — Applies rules only to that folder and its contents.
Plaintext
MyAndroidApp/
├── app/
│   ├── .aiexclude   ← covers only the /app module
│   └── src/
└── secrets/
    ├── .aiexclude   ← covers only /secrets
    └── api_keys.txt
You can even have multiple .aiexclude files in the same project, one per folder, with each one managing its own exclusion rules. They all work together, so there’s no conflict — Android Studio respects all of them.
How the .aiexclude File Syntax Works
The .aiexclude file uses a simple pattern syntax, very similar to .gitignore. Let’s break it down.
Basic File Exclusion
To exclude a specific file, just write its name or path:
Plaintext
# Exclude a specific file in the same directory
local.properties

# Exclude a file using a relative path
config/secrets.json
The # character starts a comment — anything after # on that line is ignored by the parser.
Excluding Entire Directories
Add a trailing slash / to target a whole folder:
Plaintext
# Exclude the entire secrets folder
secrets/

# Exclude a nested folder
app/src/main/assets/private/
Every file inside that folder — regardless of name or extension — becomes invisible to the AI.
Wildcard Patterns
Wildcards are your best friends here. The .aiexclude file supports standard glob patterns:
Plaintext
# Exclude all .properties files anywhere in the project
**/*.properties

# Exclude all JSON files in the config directory
config/*.json

# Exclude all files that start with "key_"
**/key_*

# Exclude everything inside any folder named "internal"
**/internal/**
The ** pattern means “match any number of directories,” so **/*.env would catch .env files no matter how deeply nested they are.
Negation with !
You can un-exclude something that was already covered by a broader rule, using !:
Plaintext
# Exclude all .properties files
**/*.properties

# ...but allow gradle.properties back in (it has no secrets)
!gradle.properties
Just like .gitignore, order matters here — later rules override earlier ones. So always put the negation after the broader exclusion.
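The "last matching rule wins" behavior can be sketched in a few lines of Python. This uses the standard-library fnmatch module as a stand-in for the real pattern matcher, so it is a simplification, not Android Studio's implementation:

```python
from fnmatch import fnmatchcase

def is_excluded(path: str, rules: list[str]) -> bool:
    """Evaluate rules top to bottom; later rules override earlier ones,
    as in .gitignore. Simplified sketch for illustration."""
    excluded = False
    for rule in rules:
        negated = rule.startswith("!")
        pattern = rule[1:] if negated else rule
        if fnmatchcase(path, pattern):
            excluded = not negated  # last match wins
    return excluded

rules = ["*.properties", "!gradle.properties"]
print(is_excluded("local.properties", rules))   # True
print(is_excluded("gradle.properties", rules))  # False
```

Swap the two rules around and gradle.properties would be excluded again — which is exactly why the negation must come after the broader pattern.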
Creating Your First .aiexclude File
Let’s walk through setting up a .aiexclude file from scratch in a typical Android project.
Step 1: Create the File
Right-click the project root in Android Studio’s Project view, select New → File, and name it exactly:
.aiexclude
No extension. No prefix. Just .aiexclude.
Tip: If you’re on Windows and File Explorer refuses to create a file name that starts with a dot, use Android Studio’s built-in file creation — it handles this correctly.
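If you prefer the terminal, you can also create the file from the command line at the project root. These are standard shell commands, nothing Android Studio specific:

```shell
# Create an empty .aiexclude at the project root
touch .aiexclude

# Confirm it exists (dot-files need the -a flag to show up)
ls -a | grep aiexclude
```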
Step 2: Add Your Exclusion Rules
Open the newly created .aiexclude file and start adding your rules. Here’s a practical starter template:
Plaintext
# ─────────────────────────────────────────────
# .aiexclude — AI context exclusion rules
# Keeps sensitive and irrelevant files out of
# Android Studio's AI assistant context.
# ─────────────────────────────────────────────

# Local configuration with API keys or secrets
local.properties
*.env
*.env.*

# Keystores and signing credentials
**/*.jks
**/*.keystore
keystore.properties

# Service account and OAuth credential files
google-services-staging.json
**/credentials/
**/service_account*.json

# Internal analytics or experiment configs
**/internal_experiments/
**/ab_test_config.json

# Build outputs - not useful for AI context
build/
**/build/
.gradle/

# Auto-generated files (reduce AI noise)
**/generated/
**/*Generated.java
**/*Generated.kt

# Raw data or large asset files
**/raw/
**/*.csv
**/*.sqlite
**/*.db

# Private documentation
docs/internal/
INTERNAL_NOTES.md
Step 3: Verify It’s Working
After saving the .aiexclude file, restart Android Studio or invalidate caches (File → Invalidate Caches / Restart). The AI assistant should now skip the excluded files entirely when generating suggestions.
You can confirm this by checking whether the AI references any content from an excluded file — it shouldn’t.
Real-World Use Cases: What to Exclude and Why
Here are common scenarios where the .aiexclude file becomes genuinely essential.
Use Case 1: Protecting API Keys in local.properties
The local.properties file is the most common place Android developers store sensitive keys — Maps API keys, Firebase project IDs, payment gateway tokens. It’s excluded from Git, but not from AI by default.
Plaintext
# .aiexclude
# Keep the AI away from local config with secrets
local.properties
keystore.properties
Why this matters: If the AI reads local.properties, it might include your API key in a generated code snippet or log statement — even innocently, in a test file it suggests.
Use Case 2: Excluding Generated Code
Generated files (like Room database implementations, Hilt component files, or proto-generated classes) create a lot of noise for the AI. The AI might try to “help” by referencing or modifying them, even though they’re auto-generated and will be overwritten on the next build.
Plaintext
# .aiexclude
# Auto-generated files - don't waste AI context on these
**/generated/
**/*_Impl.kt
**/*.pb.java
Generated files can confuse the AI or cause it to suggest changes to code that isn’t meant to be manually edited. Excluding them improves suggestion quality.
Use Case 3: Excluding Proprietary Business Logic
Maybe you’re working on a module that contains proprietary algorithms or confidential business logic — something your company doesn’t want indexed anywhere outside of approved systems.
Plaintext
# .aiexclude placed inside /pricing-engine module
# Protect proprietary pricing logic from AI indexing
algorithms/
models/pricing/
Even if you trust the AI tool itself, having strict boundaries on what it accesses is good security hygiene — especially in regulated industries.
Use Case 4: Large Files That Hurt Performance
The AI doesn’t need to read a 50MB SQLite database file or a massive CSV dataset. Including them wastes AI context budget and can slow things down.
Plaintext
# .aiexclude
# Large files that don't help the AI at all
**/*.sqlite
**/*.db
**/*.csv
assets/large_dataset.json
AI context windows have limits. Keeping them focused on actual source code means better, more relevant suggestions.
Common Mistakes to Avoid with the .aiexclude File
Even experienced developers make these slip-ups when first working with the .aiexclude file. Here’s what to watch out for.
Mistake 1: Using Absolute Paths
Always use relative paths from the location of the .aiexclude file itself:
Plaintext
# Correct — relative path
local.properties
Mistake 2: Forgetting Subdirectories
This only excludes secrets.json at the root level:
Plaintext
# Only matches root-level file
secrets.json
If the file might exist deeper in the project, add a ** prefix:
Plaintext
# Matches the file anywhere in the project
**/secrets.json
Mistake 3: Not Committing the .aiexclude File to Version Control
Unlike local.properties, the .aiexclude file itself is not sensitive — it just describes what’s sensitive. You should absolutely commit it to Git so your whole team benefits from the same exclusion rules.
Plaintext
git add .aiexclude
git commit -m "Add .aiexclude to protect sensitive files from AI context"
Mistake 4: Over-Excluding Everything
It can be tempting to exclude huge chunks of your project “just to be safe,” but that defeats the purpose of the AI assistant. If the AI can’t see your code, it can’t help you write better code.
Be selective. Exclude what’s genuinely sensitive or noisy — not everything.
The .aiexclude File vs. .gitignore: What’s the Difference?
People often ask whether these two files overlap. The short version: .gitignore controls what Git tracks and commits; the .aiexclude file controls what the AI assistant can read. A file can be listed in one and not the other.
They’re complementary, not replacements for each other. A file can be in .gitignore but still readable by the AI — that’s the exact problem the .aiexclude file solves.
Team Workflow: Making .aiexclude a Team Standard
If you’re leading a team, the .aiexclude file should be part of your project setup checklist — right alongside .gitignore and EditorConfig.
Here’s how to make it a team standard:
Add it to your project template. If your team uses a custom Android project template (or a cookiecutter script), bake in a sensible default .aiexclude file from day one.
Include it in your code review checklist. When a new secret, config, or sensitive module gets added to the project, verify the .aiexclude file is updated accordingly.
Document it in your README. A single line in your project’s README explaining that the project uses a .aiexclude file helps new team members understand the setup quickly.
Treat it like a security document. Additions to the .aiexclude file should go through a quick review — just like changes to SECURITY.md or secrets management configs.
Advanced Pattern: Module-Level .aiexclude Files
In larger, multi-module Android projects, it often makes more sense to manage exclusions at the module level rather than maintaining one giant .aiexclude file at the root.
Root .aiexclude:
Plaintext
# Global rules
local.properties
**/*.jks
**/*.keystore
**/build/
feature-payments/.aiexclude:
Plaintext
# Extra-strict for this module — payment logic is proprietary
src/
This hierarchical approach gives you fine-grained control without cluttering a single file.
Frequently Asked Questions About the .aiexclude File
Q: Does the .aiexclude file affect code completion outside of AI features?
No. The .aiexclude file only affects the AI assistant. Standard IntelliJ code completion, navigation, and refactoring tools are not impacted.
Q: Can I use the .aiexclude file in other JetBrains IDEs?
The .aiexclude file was introduced in the context of Android Studio’s AI integration. Support in other JetBrains IDEs may vary — check the documentation for the specific IDE.
Q: What happens if I have conflicting rules between two .aiexclude files?
Each .aiexclude file applies to its own directory and below. There’s no true “conflict” — rules from parent and child directories stack together. The most specific rule (closest to the file) generally wins, similar to .gitignore behavior.
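Conceptually, the stacking works like .gitignore’s: for any given file, every .aiexclude from the project root down to the file’s own directory can contribute rules. A small sketch of which rule files would be consulted, written for illustration only:

```python
from pathlib import PurePosixPath

def applicable_rule_files(file_path: str) -> list[str]:
    """Return the .aiexclude paths that could govern `file_path`,
    from the project root down to its own directory.
    Illustrative sketch of the stacking concept only."""
    parts = PurePosixPath(file_path).parent.parts
    files = [".aiexclude"]  # the project-root file always applies
    for i in range(1, len(parts) + 1):
        files.append(str(PurePosixPath(*parts[:i]) / ".aiexclude"))
    return files

print(applicable_rule_files("app/src/main/Secrets.kt"))
```

For a file in app/src/main/, the root, app/, app/src/, and app/src/main/ rule files all stack, with the most specific one closest to the file.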
Q: Will the .aiexclude file protect me from ALL data leakage?
The .aiexclude file is a strong first line of defense for local AI features in Android Studio. However, it does not control what happens when you use external AI tools, paste code into chat interfaces, or use other plugins. Treat it as one layer of a broader security practice.
Q: Should I exclude google-services.json?
It depends. The google-services.json file that ships with your app usually contains project IDs and API keys. While it’s not as sensitive as a private key, it’s worth excluding from AI context — especially the production variant.
A Ready-to-Use .aiexclude Template
Copy this into any Android project and customize as needed:
Plaintext
# ─────────────────────────────────────
# .aiexclude — Recommended Default
# Android Studio AI Context Exclusions
# ─────────────────────────────────────

# Secrets and local config
local.properties
keystore.properties
*.env
*.env.*
.env.local

# Signing keystores
**/*.jks
**/*.keystore

# Firebase and Google service files
google-services.json
GoogleService-Info.plist

# Service accounts and credentials
**/credentials/
**/service_account*.json

# Build artifacts
**/build/
.gradle/
**/.gradle/

# Auto-generated code
**/generated/
**/*Generated.java
**/*Generated.kt
**/*_Impl.kt

# Large binary or data assets
**/*.sqlite
**/*.db
**/*.csv
**/*.parquet

# Internal documentation
docs/internal/
INTERNAL*.md
CONFIDENTIAL*

# IDE-specific artifacts
.idea/workspace.xml
.idea/tasks.xm
Conclusion
The .aiexclude file is a small file with a big impact. In just a few lines, it lets you control exactly what your AI assistant sees — keeping sensitive keys, proprietary logic, and noisy generated files out of its context while letting it focus on the code that actually matters.
Here’s a quick recap of what we covered:
The .aiexclude file acts like a privacy filter between your project and the AI assistant in Android Studio.
Place it in your project root for global rules, or in subdirectories for module-level control.
It uses glob-style patterns very similar to .gitignore.
Always commit it to version control so your whole team benefits.
Combine it with other security practices — it’s one layer, not a complete solution.
If you haven’t added a .aiexclude file to your Android project yet, now’s the time. Open Android Studio, create the file, drop in the template above, and customize it for your project’s needs.
It takes five minutes and pays dividends in security, performance, and peace of mind.