By Eshita Bhargava
She uploads a selfie, types a magic prompt ("old Bollywood saree, vintage backdrop, cinematic lighting"), presses generate, and voilà: out comes an image of her in a flowing saree, looking every bit like a poster from a 90s film. The fabric drapes just right, the colours wash like film stock, and everything seems perfect and dreamy. Yet when she looks more closely, there it is: a little mole on her arm, in a spot where she never saw one in the original photo. That moment of "Wait, how did it know that?" has shaken many who have tried Google Gemini's Nano Banana AI, especially in its viral "saree edit" incarnation. The trend is joyful, aesthetic, nostalgic, and also, creeping under the surface, unnervingly intimate.

The Trend, in Full Bloom

Nano Banana is the new image-editing model in Google's Gemini app (also known as Gemini 2.5 Flash Image), and it has quickly become a household name on Instagram, TikTok and beyond. Users started with 3D figurines, toy-styled faces and pets, and then someone had the spark: why not sarees? The saree version takes a user's selfie and turns it into a retro Bollywood-style portrait, often with vintage backdrops, golden hues and the works. Millions are having fun: edits are shared, the trend spreads, and people marvel at how realistic or artistic the results look. Nano Banana's promise is strong: you can change parts of your image while keeping other details constant (your face, skin tone, structure, lighting), so the edits don't feel like cheap stickers pasted on.

The "Creepy" Moment

Enter Instagram user Jhalakbhawani, who hopped on the trend expecting something glamorous. She found something else: a mole in the AI-generated image, in a place where her original photo showed no visible mole. And she was spooked. "How Gemini knows I have mole in this part of my body? This is very scary, very creepy…" she said in a video, urging others to be more cautious about what they upload to AI platforms. Her reaction was not unique. On the same post, some commenters said, "Happened to me too"; others asserted that Google "must have access to far more data than the single pic we upload," or that the AI "is stitching together leaks from other pics or metadata or something." Some were amused, some unnerved, some angry. It is the kind of reaction that mixes disbelief with suspicion.

What Experts Say: Not Just Fun, But Risky

Tech journalists and privacy experts have been pointing out exactly what is disquieting: the moment you upload a facial image, or any image, to an AI tool, you are giving away more than you may realise. The tool could retain that image, analyse details (facial structure, skin marks, jewellery, lighting, background), and use them to train models, build profiles, or combine them with other images. Even if Google (or another provider) claims that images are stored securely and carry invisible watermarks (such as SynthID) and metadata tags identifying AI-generated content, users often cannot verify or control how long their images are kept, or what derived data is extracted along the way. There are also deepfake concerns: once an image looks extremely realistic, bad actors could use AI-edited or AI-generated images to impersonate someone, create misleading content or scams, or violate someone's consent. Another issue: privacy laws vary widely, and what is acceptable in one jurisdiction may not be in another. Experts also warn about metadata leaks (time, location, device), facial recognition and identity theft.
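To see what "metadata leaks" means in practice, here is a minimal sketch in Python, assuming the Pillow imaging library (pip install Pillow) and a hypothetical file named selfie.jpg: it prints the EXIF metadata a phone typically embeds in a photo, including the GPS block that records where it was taken.

```python
# Minimal sketch: list the hidden EXIF metadata inside a photo.
# Assumes the Pillow library; "selfie.jpg" is a hypothetical file name.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

img = Image.open("selfie.jpg")
exif = img.getexif()

# Top-level tags: camera model, software, timestamps, and so on.
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# GPS data lives in a nested block (IFD); tag 34853 is GPSInfo.
for tag_id, value in exif.get_ifd(34853).items():
    print(f"GPS {GPSTAGS.get(tag_id, tag_id)}: {value}")
```

If the GPS section prints coordinates, anyone who receives the original file, including an AI platform, gets your location along with your face.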
Voices from the People Who Used Nano Banana

Setting the theory aside, actual users offer mixed opinions. One user, after trying Nano Banana for editing portraits and combining scenes, said that while the results are fun and often beautiful, there are quirks: sometimes the AI over-smooths skin, changes features they didn't ask it to touch, or introduces artefacts ("gave me sunburn", "trimmed my beard") they never wanted. Others expressed gratitude at how much the tool improves on earlier image-editing AI: it edits small details without distorting everything else, with smoother outputs and more control. But many also say that more control doesn't fully protect your identity or privacy. Users who noticed details like the mole tend to believe the tool had somehow "seen" them in an earlier photo, or that their other social media photos had revealed them (via lighting or angles) to the model. Some are angry, feeling their privacy was invaded. Others shrug and decide, "Ah well, I won't upload anything personal any more."

Is It Safe, or Just Creepy (Or Both)?

We tread here in a grey zone. On one hand, there is nothing inherently evil about the Nano Banana saree trend. It is creative, playful, and lets people explore new forms of self-expression. It is visually striking, nostalgic, and for many people just plain fun. The convenience and the aesthetic magic are real draws. On the other hand, the "creepy" comes in where the AI appears to "know" things it wasn't explicitly shown, or where users feel exposed. The fact that the AI can recreate or invent small identity markers (like moles or skin marks) that weren't visible in the original photo suggests the model is leveraging prior data (possibly images already uploaded or made public), or metadata, or is simply making an educated guess that happens to match. That "match" feels unsettling, because it blurs the line between what is private and what is handed to the AI. So while "safe" depends on your threshold of risk, several ingredients justify caution:

- Data persistence: once images are uploaded, you rarely know how long they are stored, who has access, or how they will be used.
- Data combination: AI models may draw on large image datasets, some scraped from public profiles and repositories, possibly including personal photos you posted earlier.
- Misuse: with convincing images, deepfakes and identity impersonation become more feasible, enabling harassment, scams or social engineering.

What to Always Be Careful About

Because of all this, there are things you should think through before uploading images to AI platforms, especially when viral trends or public sharing are involved. Your face, your marks, your pose and your surroundings are all clues. We are often careless, posting selfies everywhere, frequently with metadata (geolocation, timestamps) and other unguarded images of ourselves. AI could link those in ways we do not expect. Here are some of the sneaky ways privacy may leak:

- Hidden metadata in photos (location, device details), as the EXIF sketch above shows.
- Facial recognition matching with other images online.
- Older photos, posts and uploads forming a "database" that models may have trained on, so that your image style, your skin marks, even your habits in photos are baked into the model.
- Fake or "clone" apps that look like the real Gemini/Nano Banana but actually steal user data or images.
- The pressure of default sharing: once you upload and generate, many people post the final images publicly, which spreads the derived data further.

How to Enjoy AI Creativity Without Losing Sleep

I'm not trying to say you should never use Nano Banana or similar tools. On the contrary: these trends are fun, the visuals are tempting, and there is something magical in seeing yourself transformed. But it is like walking on a beautiful glass floor: you love how it looks, but you don't want to lean so far that you slip. Here are some gentle tips:

- Use images where you are comfortable sharing the identity markers and background; avoid anything too revealing (your real location, private home interiors).
- Remove metadata (location, camera details) before upload, if your device lets you; the sketch after this list shows one way to do it.
- Check the app source: use the official Gemini app or site, and avoid third-party clone or trick apps.
- Read the privacy policy, especially clauses about "training models", "data retention" and "sharing with third parties".
- Limit how many personal photos (face, torso and the like) you upload; maybe try an image where your face is partially obscured.
- Be mindful of what you share afterwards; even one cute saree edit can propagate data.
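For the metadata tip above, a blunt but effective approach is to rebuild the image from its raw pixels, which drops the EXIF block and every other metadata field. A minimal sketch, again assuming Pillow and the hypothetical selfie.jpg:

```python
# Minimal sketch: make a metadata-free copy of a photo before uploading.
# Assumes the Pillow library; file names are hypothetical.
from PIL import Image

original = Image.open("selfie.jpg")
clean = Image.new(original.mode, original.size)  # blank image, no metadata
clean.putdata(list(original.getdata()))          # copy pixel values only
clean.save("selfie_clean.jpg")                   # upload this copy instead
```

Rebuilding from pixels is deliberately crude: rather than trying to enumerate every sensitive tag, it carries nothing over but the picture itself.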
It's a Mirror of Our Choices

The Nano Banana saree trend is like a mirror: it shows us what we love about image, beauty and creative remixing, and also what we might not have thought through, namely how much of ourselves (literal details, identity clues) we are giving away. The "creepy" moments, like a mole appearing or details that seem too specific, make visible the invisible infrastructure behind our online selves. These tools don't have to be scary. They can offer joy, aesthetic pleasure, even artful self-expression. But safety lies in awareness: of what the tools can see, retain and combine; of what we post elsewhere; of how policies and laws interact with our rights; and of how, once something is uploaded, it often cannot be fully "unsent". So, is the Nano Banana saree trend safe or creepy? It is a bit of both. It leans joyous and creative, but it carries real potential for overreach. And every time we use one of these trendy tools, we get to decide: am I okay with what the tool may remember, infer, or reveal? It is always better to decide consciously, and enjoyably, than to wake up one day to a mole in an unexpected place.