Technology News

  1. We’re all talking to fake people now, but most people don’t realize that interacting with AI is a subtle and powerful skill that can and should be learned.

    The first step in developing this skill set is to acknowledge to yourself what kind of AI you’re talking to and why you’re talking to it.

    AI voice interfaces are powerful because our brains are hardwired for human speech. Even babies’ brains are tuned to voices before they can talk, picking up language patterns early on. This built-in conversational skill helped our ancestors survive and connect, making language one of our most essential and deeply rooted abilities.

    But that doesn’t mean we can’t think more clearly about how to talk when we speak to AI. After all, we already speak differently to other people in different situations. For example, we talk one way to our colleagues at work and a different way to our spouses.

    Yet people still talk to AI like it’s a person, which it’s not; like it can understand, which it cannot; and like it has feelings, pride, or the ability to take offense, which it doesn’t.

    The two main categories of talking AI

    It’s helpful to break the world of talking AI (both spoken and written) into two categories:

    1. Fantasy role playing, which we use for entertainment.
    2. Tools, which we use for some productive end, either to learn information or to get a service to do something useful for us.

    Let’s start with role-playing AI.

    AI for pretending

    You may have heard of a site and app called Status AI, which is often described as a social network where everyone else on the network is an AI agent.

    A better way to think about it is that it’s a fantasy role-playing game in which the user can pretend to be a popular online influencer.

    Status AI is a virtual world that simulates social media platforms. Launched as a digital playground, it lets people create online personas and join fan communities built around shared interests. It “feels” like a social network, but every interaction—likes, replies, even heated debates—comes from artificial intelligence programmed to act like real users, celebrities, or fictional characters.

    It’s a place to experiment, see how it feels to be someone else, and interact with digital versions of celebrities in ways that aren’t possible on real social media. The feedback is instant, the engagement is constant, and the experience, though fake, is basically a game rather than a social network.

    Another basket of role-playing AI comes from Meta, which has launched AI-powered accounts on Facebook, Instagram, and WhatsApp that let users interact with digital personas — some based on real celebrities like Tom Brady and Paris Hilton, others entirely fictional. These AI accounts are clearly labeled as such, but (thanks to AI) can chat, post, and respond like real people. Meta also offers tools for influencers to use AI agents to reply to fans and manage posts, mimicking their style. These features are live in the US, with plans to expand, and are part of Meta’s push to automate and personalize social media.

    Because these tools aim to provide make-believe engagements, it’s reasonable for users to pretend like they’re interacting with real people.

    These Meta tools attempt to cash in on the wider and older phenomenon of virtual online influencers. These are digital characters created by companies or artists, but they have social media accounts and appear to post just like any influencer. The best-known example is Lil Miquela, launched in 2016 by the Los Angeles startup Brud, which has amassed 2.5 million Instagram followers. Another is Shudu, created in 2017 by British photographer Cameron-James Wilson, presented as the world’s first digital supermodel. These characters often partner with big brands.

    A post by one of the major virtual influencer accounts can get hundreds or thousands of likes and comments. The content of these comments ranges from admiration for their style and beauty to debates about their digital nature. Presumably, many people think they’re commenting to real people, but most probably engage with a role-playing mindset.

    By 2023, there were hundreds of these virtual influencers worldwide, including Imma from Japan and Noonoouri from Germany. They’re especially popular in fashion and beauty, but some, like FN Meka, have even released music. The trend is growing fast, with the global virtual influencer market estimated at over $4 billion by 2024.

    AI for knowledge and productivity

    We’re all familiar with LLM-based chatbots like ChatGPT, Gemini, Claude, Copilot, Meta AI, Mistral, and Perplexity.

    The public may be even more familiar with non-LLM assistants like Siri, Google Assistant, Alexa, Bixby, and Cortana, which have been around much longer.

    I’ve noticed that most people make two general mistakes when interacting with these chatbots or assistants.

    The first is that they interact with them as if they’re people (or role-playing bots). And the second is that they don’t use special tactics to get better answers.

    People often treat AI chatbots like humans, adding “please,” “thank you,” and even apologies. But the AI doesn’t care, doesn’t remember, and isn’t meaningfully affected by these niceties. Some people even say “hi” or “how are you?” before asking their real questions. They also sometimes ask for permission, as in “Can you tell me…” or “Would you mind…,” which adds no value. Some even sign off with “goodbye” or “thanks for your help,” but the AI doesn’t notice or care.

    Politeness to AI wastes time — and money! A year ago, Wharton professor Ethan Mollick pointed out that people using “please” and “thank you” in AI prompts add extra tokens, which increases the compute power needed by the LLM chatbot companies. This concept resurfaced on April 16 of this year, when OpenAI CEO Sam Altman replied to another user on X, saying (perhaps exaggerating) that polite words in prompts have cost OpenAI “tens of millions of dollars.”
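    To make that cost claim concrete, here’s a back-of-the-envelope sketch of how many tokens courtesy phrases might add to a prompt. The phrase list and the roughly-four-characters-per-token rule of thumb are loose assumptions for illustration, not how any production tokenizer actually works:

    ```python
    import re

    # Courtesy phrases that add tokens without adding information.
    # This list and the ~4-chars-per-token heuristic are rough assumptions,
    # not a real tokenizer's behavior.
    POLITE_PHRASES = ["please", "thank you", "would you mind", "goodbye"]

    def estimate_extra_tokens(prompt: str) -> int:
        """Roughly estimate how many tokens a prompt spends on politeness."""
        lowered = prompt.lower()
        chars = 0
        for phrase in POLITE_PHRASES:
            # Whole-word matches only, so "goodbye" can't match inside another word.
            for match in re.finditer(r"\b" + re.escape(phrase) + r"\b", lowered):
                chars += len(match.group())
        return chars // 4  # crude ~4-characters-per-token approximation

    print(estimate_extra_tokens("Please summarize this memo. Thank you!"))  # → 3
    ```

    Multiply a handful of wasted tokens by billions of daily prompts and the “tens of millions of dollars” figure starts to look plausible, even if exaggerated.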

    “But wait a second, Mike,” you say. “I heard that saying ‘please’ to AI chatbots gets you better results.” And that’s true — sort of. Several studies and user experiments have found that AI chatbots can give more helpful, detailed answers when users phrase requests politely or add “please” and “thank you.” This happens because the AI models, trained on vast amounts of human conversation, tend to interpret polite language as a cue for more thoughtful responses.

    But prompt engineering experts say that clear, specific prompts — such as giving context or stating exactly what you want — consistently produce much better results than politeness.

    In other words, politeness is a tactic for people who aren’t very good at prompting AI chatbots.

    The best way to get top-quality answers from AI chatbots is to be specific and direct in your request. Always say exactly what you want, using clear details and context.

    Another powerful tactic is something called “role prompting” — tell the chatbot to act as a world-class expert, such as, “You are a leading cybersecurity analyst,” before asking a question about cybersecurity. This method, proven in studies like Sander Schulhoff’s 2025 review of over 1,500 prompt engineering papers, leads to more accurate and relevant answers because it tells the chatbot to favor content in the training data produced by experts, rather than just lumping the expert opinion in with the uneducated viewpoints.

    Also: Give background if it matters, like the audience or purpose.
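    Taken together, those tactics map neatly onto the system/user message structure that most LLM chat APIs accept. Here’s a minimal sketch, assuming an OpenAI-style messages list; the payload shape, helper name, and expert persona are illustrative, not any specific vendor’s API:

    ```python
    def build_role_prompt(role: str, task: str, context: str = "") -> list[dict]:
        """Assemble a chat payload: expert persona first, then a specific, direct task."""
        messages = [
            # Role prompting: steer the model toward expert-level training data.
            {"role": "system", "content": f"You are {role}."},
        ]
        # Be specific and direct; append audience/purpose context when it matters.
        user = f"{task}\n\nContext: {context}" if context else task
        messages.append({"role": "user", "content": user})
        return messages

    messages = build_role_prompt(
        role="a leading cybersecurity analyst",
        task="List the three most likely attack vectors for a small e-commerce site.",
        context="The audience is a non-technical store owner.",
    )
    ```

    The same structure works whether you’re typing into a chat window or calling an API: persona, then task, then context.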

    (And don’t forget to fact-check responses. AI chatbots often lie and hallucinate.)

    It’s time to up your AI chatbot game. Unless you’re into using AI for fantasy role playing, stop being polite. Instead, use prompt engineering best practices for better results.

  2. As AI integration accelerates, businesses are facing widening skills gaps that traditional employment models struggle to address, and so more companies are choosing to hire freelancers to fill the void, according to a new report.

    The report, from freelance work platform Upwork, claims there’s a major workforce shift as well, with 28% of skilled knowledge workers now freelancing for greater “autonomy and purpose.” An additional 36% of full-time employees are considering switching to freelance, while only 10% of freelancers want to return to traditional jobs, according to the report, which is based on a recent survey of 3,000 skilled, US-based knowledge workers.

    Gen Z is leading the shift, as 53% of skilled Gen Z professionals already freelance, and they’re expected to make up 30% of the freelance US workforce by 2030.

    “The traditional 9-to-5 model is rapidly losing its grip as skilled talent chooses flexibility, financial control, and meaningful work over outdated corporate structures,” Kelly Monahan, managing director of the Upwork Research Institute, said in a statement. “Companies that cling to old hiring and workforce models risk falling behind.”

    US businesses have ramped up their freelance hiring by 260% between 2022 and 2024, according to a report from Mellow.io, an HR platform that manages and pays freelance contractors.

    Increasingly, US businesses have turned to freelancers overseas — most frequently Eastern Europe — to fill their tech talent void, particularly for web developers, programmers, data analysts, and web designers, according to Mellow.io.

    “This trend shows no signs of slowing,” Mellow.io’s report stated. “The region offers an unparalleled balance of cost efficiency and highly skilled talent.”

    The US is a freelancer haven

    Gig workers, earning through short-term, flexible jobs via apps or platforms, are also thriving, according to career site JobLeads.

    JobLeads analyzed data from the Online Labour Observatory and the World Bank Group to reveal the countries dominating online gig work. The United States is leading in the number of online freelancers with 28% of the global online freelance market.

    Software and tech roles dominate in the US, representing 36.4% of freelancers, followed by creative/multimedia (21.1%) and clerical/data entry jobs (18.2%).

    Globally, Spain and Mexico rank second and third in freelancer share, with 7.0% and 4.6%, respectively. Among full-time online gig workers, 52% have a high school diploma, while 20% hold a bachelor’s degree, according to JobLeads.

    “The gig economy is booming worldwide, with the number of gig workers expected to rise by over 30 million in the next year alone,” said Martin Schmidt, JobLeads’ managing director. “This rapid growth reflects a fundamental shift in how people approach work — flexibility and autonomy are no longer just perks but non-negotiables for today’s workforce.”

    Gen Z and younger professionals are embracing gig work for its flexibility and control, while businesses gain access to a global pool of skilled freelancers, Schmidt said.

    “As the sector continues to evolve, both workers and employers need to adapt to a new reality where traditional employment models may no longer meet the needs and expectations of the modern workforce,” he said.

    Confidence in freelancing is high, with 84% of freelancers and 77% of full-time workers viewing its future as bright, according to Upwork’s report. Freelancers are also seeing more opportunities, with 82% reporting more work than last year, compared to 63% of full-time employees, the report said.

    Freelance workers generated $1.5 trillion in earnings in 2024 alone, Upwork said. The trend is gaining momentum, particularly among Gen Z, with many full-time employees eyeing independent work, according to Upwork.

    Freelancers are leading in AI, software, and sustainability jobs, demonstrating higher adaptability and continuous learning, according to the report, which focused exclusively on skilled knowledge workers, not gig workers. It also included “moonlighters,” or workers who have full-time employment but freelance on the side.

    More than half (54%) of freelancers report advanced AI proficiency compared to 38% of full-time employees, and 29% have extensive experience building, training, and fine-tuning machine learning models (vs. 18% of full-time employees), the report stated.

    Those who earn exclusively through freelance work report a median income of $85,000, surpassing their full-time employee counterparts at $80,000, the report stated.

    Upwork’s Future Workforce Index is the company’s first such report, and so it said it is unable to provide freelance employment numbers from previous years that would indicate a rising or falling trend.

    “However, what we can confidently say, based on multiple studies conducted by the Upwork Research Institute over the past several years, is that freelancing isn’t a passing trend,” an Upwork spokesperson said. “It continues to hold steady and accelerate, emerging as a vital and intentional component of the skilled workforce.”

    A silver tsunami

    Emily Rose McRae, a Gartner Research senior director analyst, said she’s seeing growing interest in freelancing from professionals who desire more flexible work, oftentimes as a safety net for people who lost their job during economic turmoil “and also as a way to build up a network of clients when starting a new business or looking to expand your small business.”

    Organizations are also facing an impending “silver tsunami” of older workers retiring and leaving a talent gap in their wake.

    “Many clients I speak with on this topic are trying to identify the best strategy for addressing this expertise gap, whether it is upskilling more junior employees, bringing retired experts back to serve as freelance mentors or coaches, contracting out critical projects to experts on a freelance market, or even redesigning roles and workflows to reduce the amount of expertise needed,” she said.

    “Being able to bring past employees back as freelancers can be critical for knowledge management and training,” McRae said. “This is especially critical as AI tools are increasingly deployed on the basic and repetitive tasks that were previously the training ground used to create a pipeline of future experts within the employee base.”

    Despite the occurrence of layoffs — and sometimes because of them — organizations often face skills gaps exacerbated by the rise of AI, according to Forrester Research.

    Skills intelligence tools, often powered by AI, can help organizations identify and manage the skills and gaps in their workforce and predict future skill needs, including recommending needed recruiting, upskilling and reskilling, and talent mobility. Companies must also be able to scale up or down rapidly in on-demand talent markets, which include contractors, freelancers, gig workers, and service providers. On-demand talent increases the adaptability of your workforce but works best for non-core functions and for specialized skills that are needed for a limited period.

    Companies, however, can’t simply replace employees with freelancers without facing significant risks, McRae noted. Freelancers are best used for defined projects with clear deliverables. Using them to do the same work as former employees, without changing the role or workflow, can lead to legal and operational issues, she said. As reliance on non-employees grows, so do risks like worker misclassification, dual employment, compliance problems, and costly mistakes such as rehiring underperforming contractors or overpaying for services.

    “I’ll see this at organizations that instituted hiring freezes, so business leaders turned to contractors to continue to be able to meet their goals,” she said. “It can also create financial risks — when there isn’t much transparency or data collection going on, organizations may find that they are paying the same contractor service provider or freelancer different rates in different departments, for the same set of tasks.”

    There’s also a risk that third-party contractors are not vetting temp workers, who may not meet the necessary certifications and trainings to comply with local or national regulations, McRae added.

    “Or that contractors and freelancers have not been fully offboarded after completing their assignments and still retain access to the organization’s systems and data,” she said.

  3. Psst: Come close. Your Android phone has a little-known superpower — a futuristic system for bridging the physical world around you and the digital universe on your device. It’s one of Google’s best-kept secrets. And it can save you tons of time and effort.

    Oh — and no, it isn’t Gemini.

    It’s a little somethin’ called Google Lens, and it’s been lurking around on Android and quietly getting more and more capable for years — since long before “AI” became part of our popular vernacular. Google doesn’t make a big deal about it, weirdly enough, and you really have to go out of your way to even realize it exists. But once you uncover it, you’ll feel like you have a magic wand in your pocket.

    At its core, Google Lens is best described as a search engine for the real world. It uses (yes…) artificial intelligence to identify text and objects both within images and in a live view from your phone’s camera, and it then lets you learn about and interact with those elements in all sorts of interesting ways.

    But while Lens’s ability to, say, identify a flower, look up a book, or give you info about a landmark is certainly impressive, it’s the system’s more mundane-seeming productivity powers that are far more likely to find a place in your day-to-day life.

    So grab your nearest Android gadget, go install the Google Lens app, if you haven’t already — or take your pick from any of the other smart Google-Lens-launching shortcuts — and get ready to teach your phone some spectacularly useful new tricks.

    [Hey — love shortcuts? My free Android Shortcut Supercourse will teach you tons of time-saving tricks for your phone. Sign up now and start learning!]

    Google Lens trick #1: Dive deep into your screen

    In a mildly wild twist, the first and newest Google Lens goody in our list is also the oldest and most familiar one of all — at least, if you’ve been paying attention in this arena for long.

    It’s a snazzy new feature that lets you indirectly have Lens analyze whatever’s on your screen and then give you helpful extra context around it.

    This one can actually be accessed via Google’s next-gen Gemini virtual assistant. Just summon Gemini, using the “Hey Google” hotword or any other method you like, then look for the tappable “Ask about screen” button within its overlay interface.

    Google Lens Android: Ask about screen
    The “Ask about screen” button within Gemini is a hidden way to access a powerful Lens feature.

    JR Raphael, Foundry

    While the answer is wrapped in Gemini, the technology powering it is the same stuff that’s been present within Lens for ages. And it’s every bit as impressive.

    Google Lens Android: Ask about screen results
    Answers, on demand — from anywhere on Android.

    JR Raphael, Foundry

    And if you’re feeling a pesky sense of déjà vu around this, well, you should be: Google first announced this latest iteration of the on-demand screen searching system more than two years ago, for the previous-gen Google Assistant system. Prior to that point, Assistant had briefly offered a similar sort of setup without Lens’s involvement. And prior to that, Google had a spectacularly useful native Android feature called Now on Tap, way back in 2015’s Android 6.0 (Marshmallow) era — though amusingly, we haven’t quite caught back up to that level of search intelligence just yet.

    Hey, what can we say? It’s the Google way.

    Google Lens trick #2: Copy text from the real world

    From the virtual world to the physical world around us, Google Lens’s most potent power and the one I rely on most frequently is its ability to grab text from a physical document — a paper, a book, a whiteboard, a suspiciously wordy tattoo on your rumpus, or anything else with writing on it — and then copy that text onto your phone’s clipboard. From there, you can easily paste the text into a Google Doc, a note, an email, a Slack chat, or anywhere else imaginable.

    To do that, just open up Google Lens, point your device’s camera at any text around you, then tap the big circular search icon — and you’ll be able to use your finger to select the exact portion of text you want as if it were regular ol’ digital text on a website.

    All that’s left is to hit the “Copy” command in the pop-up alongside it, and every last word will be on your system clipboard and ready to paste wherever your thumpy little heart desires.

    Google Lens Android: Copy text
    You can copy text from anywhere — virtual or physical — with a little help from Lens.

    JR Raphael, Foundry
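    Under the hood, this trick is optical character recognition. For a rough desktop analogue, you could sketch the same flow with the open-source Tesseract engine via the pytesseract library — an assumption chosen purely for illustration; Google hasn’t said what Lens uses internally:

    ```python
    import re

    def tidy_ocr_text(raw: str) -> str:
        """Clean raw OCR output for pasting: rejoin hyphenated words, unwrap lines."""
        text = re.sub(r"-\n(\w)", r"\1", raw)          # rejoin words split across lines
        text = re.sub(r"(?<!\n)\n(?!\n)", " ", text)   # single newlines become spaces
        return re.sub(r"[ \t]+", " ", text).strip()    # collapse stray whitespace

    def copy_text_from_photo(path: str) -> str:
        """OCR a photo of a document, mimicking Lens's copy-text trick (assumed stack)."""
        # pytesseract (pip install pytesseract) wraps the Tesseract binary;
        # it's an illustrative stand-in, not what Lens actually runs.
        from PIL import Image
        import pytesseract
        return tidy_ocr_text(pytesseract.image_to_string(Image.open(path)))

    print(tidy_ocr_text("sus-\npiciously wordy\ntattoo"))  # → suspiciously wordy tattoo
    ```

    The cleanup step matters as much as the OCR itself — raw recognition output is full of line breaks and hyphenation that you don’t want landing on your clipboard.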

    Google Lens trick #3: Connect text to your computer

    Let’s face it: Most of us aren’t working only from our Android phones. If you need to get some real-world text onto your computer, Lens can handle that for you, too.

    Just go through the same steps we did a second ago, but this time, look for the “Copy to computer” option in the same pop-up menu. (You might have to tap a three-dot icon within that pop-up to reveal it.) As long as you’re actively signed into Chrome with the same Google account on a computer — any computer, whether it’s Windows, Mac, Linux, or ChromeOS — that option should appear. And when you tap it, you’ll get a list of all available destinations.

    Google Lens Android: Send text to computer
    Get any text onto your computer’s clipboard in an instant with Lens’s clever copying commands.

    JR Raphael, Foundry

    Pick the device you want, and just like magic, the text from the physical document will be on that computer’s clipboard — ready and waiting to be pasted wherever you want it. Hit Ctrl-V (or Cmd-V, on a Mac), and shazam! It’ll pop into any text field, in any app or process where pasting is supported.

    Google Lens trick #4: Hear anything out loud

    Maybe you’ve just been handed a long memo, a printed-out brief of some sort, or a letter from your dear Aunt Sally. Whatever it is, give your eyes a breather and let Lens read it for you while you’re on the go and between meetings.

    Just point your phone at the paper, exactly as we did before, and select some specific text within the image once more. This time, look for the little “Listen” option in the pop-up panel atop your image.

    Pound your pinky down on that bad boy, and the Google Lens app will actually read the selected text out loud to you, in a soothingly pleasant voice.

    Hey, Google: How ’bout a nap-time story while we’re at it?!

    Google Lens trick #5: Ask, ask, ask away

    In a move that now seems like foreshadowing, Lens has the completely concealed ability to let you chat out loud and ask anything imaginable about whatever your device’s camera is showing.

    All you’ve gotta do to try it is open up Lens, aim your camera at something, and then press and hold Lens’s big search button. Then, you can simply speak aloud and ask anything on your mind in a completely natural, conversational way.

    Google Lens Android: Talk
    Pressing and holding the Lens search button lets you talk and ask questions in a completely natural way.

    JR Raphael, Foundry

    Google’s official introduction of the feature involved asking questions about why some sort of product isn’t working as expected or how you can fix some common real-world maintenance issue, but it can be every bit as helpful for practically any purpose — anytime you find yourself facing a question about something in front of you.

    This one, for now, seems to be available only within the U.S. and in English.

    Google Lens trick #6: Interact with text from an image

    In addition to the live stuff, Lens can pull and process text from images — including both actual photos you’ve taken and screenshots you’ve captured.

    That latter part opens up some pretty interesting possibilities. Say, for instance, you’ve just gotten an email with a tracking number in it, but the tracking number is some funky type of text that annoyingly can’t be copied. (This seems to happen to me way too often.) Or maybe you’re looking at a web page or presentation where the text for some reason isn’t selectable.

    Well, grab a screenshot — by pressing your phone’s power and volume-down buttons together — then make your way over to the Google Lens app. Tap the image icon in Lens’s lower-left corner, look for your screenshot in the gallery that appears, then tap it. And from there, you can simply touch your finger anywhere on the screen to select any text you want. (The same capability is also now present in the newer Android Circle to Search system, by the by, though that feature is still much more limited in its availability.)

    You can then copy the text, send it to a computer, or perform any of Lens’s other boundary-defying tricks. Speaking of which…

    Google Lens trick #7: Search for any text, anywhere

    After you’ve selected any manner of text from within the Google Lens app, look for the “Search” option within the pop-up panel that appears atop it. It’s all too easy to overlook, but alongside the other options we’ve gone over sits the simple and supremely useful “Search.”

    Keep that option in mind as a super-easy way to get info on text from any physical document or captured image without having to manually peck in the words on your own. (Sometimes, Lens will even put related results right within a panel beneath your image, without any additional searching required.)

    And on a related note…

    Google Lens trick #8: Search for similar visuals

    We already know that Lens can search for the text from an image. But the app is also capable of searching the web for other images — images that match the actual objects within whatever photo or screenshot you’re viewing. It’s a fantastic way to find visually similar images or even identify something like a specific phone model or product seen within a photo.

    To pull off this slice of Googley sorcery, open up an image of anything within Lens — or just point your phone at an object in the real world — then swipe up on the panel that appears beneath it and look for the “Visual matches” section.

    Google Lens Android: Similar images
    Searching for similar visuals is a smart way to get extra context about anything around you.

    JR Raphael, Foundry

    Google Lens trick #9: Save someone’s contact info

    If you find yourself holding a business card and thinking, “Well, blimey, I sure as heckfire don’t want to type all of this into my contacts app,” first, congratulate yourself on the excellent use of blimey — and then sit your beautiful person-shell back and let Lens handle the heavy lifting for you.

    Open Lens, point your phone’s camera at the card, and tap on the person’s phone number or email address. The Google Lens app should recognize the nature of the info and prompt you to add a contact.

    One more tap, and the deed is done.

    Google Lens trick #10: Email, call, text, or navigate

    Got an address or number you need to get onto your phone for a specific sort of action? It could be on a business card, on a letter, or even on the front of a random business’s door. Whatever the case, just open the Google Lens app, point your phone at it, and tap the text. (Or, option B: Snap a photo of the info in question and then pull it up in the Lens app later.)

    Once Lens sees it, it’ll offer to do whatever’s most appropriate for the sort of info involved. Then, with a single tap, you’ll have the address ready to roll in a new email draft, the number ready to call or text in your dialer or messaging app, or the website pulled up and ready for your viewing in your browser — no time-wasting typing required.

    Google Lens trick #11: Translate text from the real world

    If you ever find yourself staring at a sign in another language and wondering what in the world it says, remember that the Google Lens app has a built-in translation feature. To find it, open the app, aim your phone at the text, and tap the word “Translate” along the bottom edge of the screen.

    Before you know it, Lens will replace the words on your screen with their English equivalents (or with a translation in whatever language you select, if English isn’t your tasse de thé) — practically in real time. It’s almost spooky how fast and effective it is.

    Translation on demand, courtesy of Google Lens.

    JR Raphael, Foundry

    Pas mal, eh?

    Google Lens trick #12: Calculate quickly

    The next time you’ve got a numerical challenge in front of your weary peepers, give your musty ol’ brain a break and let Lens do some good old-fashioned solvin’ for ya.

    Just open up Lens and point your phone at the equation in question — whether it’s on a whiteboard, a physical piece of paper, or even a screen in front of you. Scroll over along the line at the bottom of the Lens viewfinder screen until you see the word “Homework” (and don’t worry: Despite what that label implies, you don’t have to be an annoyingly youthful and bushytailed student to use it).

    Tap that, then tap the big Lens search icon. And with everything from basic equations to advanced math, chemistry, physics, and biology, Lens will eagerly do your calculation for you and spit back an answer in the blink of an eye.

    I won’t tell if you don’t.

    Google Lens trick #13: Scan your skin

    Here’s a weird one: If you ever have some mysterious marking on your mammal skin and find yourself fretting over whether it’s a freckle or something more nefarious, take matters into your own hands and let Lens play the role of dermatologist for you.

    Just fire ‘er up on whatever Android phone you’re holding and point the viewfinder at your wretch-inducing wart (er, perfectly natural epidermal abnormality), then tap that Lens search button. Before you can spit out the words “Holy moley,” Lens will give you a best guess at what you’ve got goin’ on.

    Its results aren’t scientific, of course, but they are based on matching your marking to an endless array of examples Lens seeks out on the web — so they’re a fine way to put your mind at ease while you wait for an actual doctor to examine your suspiciously spotted outer layer.

    Google Lens trick #14: Crack the codes

    ‘Twas a time when Android code-reading apps were all the rage — and plenty of folks still have ’em hangin’ around today. So long as the Google Lens app is on your phone, though, guess what? You don’t need anything more.

    Just open up Lens, aim your camera at any barcode or QR code, and poof: Lens will offer to show you whatever that code contains faster than you can ask “What does QR stand for, anyway?”

    Google Lens Android: Scan codes
    Who needs a QR code reader when Lens is ready and waiting?

    JR Raphael, Foundry

    Being a mobile-tech magician has never been so satisfying.

    Get six full days of advanced Android knowledge with my free Android Shortcut Supercourse. You’ll learn tons of time-saving tricks for your phone!

  4. The Google antitrust trial is “a nonsensical move by the US government that is ridiculously short-sighted, because if compromising privacy for ad revenue is bad, compromising privacy to train AI models is dangerous,” an industry analyst said Thursday.

    Paddy Harrington, senior analyst at Forrester, who specializes in security and risk, added that efforts by the US Department of Justice (DOJ) to derail the company are a “move that may have been done in the name of anti-trust and competition, but it does not appear as if anyone thought through the implications of something else.”

    That ‘something else’ reared its head this week during the remedies portion of the trial, which follows Judge Amit Mehta’s August 2024 ruling against Google. Nick Turley, the product lead of ChatGPT at OpenAI, said his company would be interested in acquiring Google’s Chrome browser if the company is forced to sell it.

    Could disrupt the browser market

    Harrington countered, “this is not as simple as selling off a product; it’s a complete platform. And it’s moving from Google, where data collection is about selling ads, to OpenAI, where data collection is about training AI to then sell to a ridiculously wide variety of purposes. A ‘devil you know versus the devil you don’t know’ sort of deal.”

    Such a move, said Harrington, has the potential to completely disrupt the browser market as a whole, not just for Google. “It’s understandable why OpenAI would want it for training the AI models, but if they purchase Chrome, what happens to Chromium? While it’s the open-source project of Chrome and ChromeOS, does Google keep the project under their development arm, or does it go with the browser?”

    He pointed out that the majority of non-Chrome browsers use Chromium as their engine, so “if it stays under Google, by and large the browser market should probably remain the same. If it goes along with the sale to OpenAI, that could cause a serious disruption, as the privacy-focused developers may want to distance themselves from a company that’s making money feeding their AI models.”

    It’s all about the data

    OpenAI’s interest in Chrome, said Brian Jackson, principal research director at Info-Tech Research Group, “is all about the data. OpenAI acquiring Chrome would be like Exxon acquiring an untapped oil field with the drilling already done. Just turn on the taps, and the good stuff starts flowing. Beyond that, [Chrome] also has about 3.5 billion users, which [OpenAI] could reach instantly with its AI services.”

    It is, he said, possible to imagine an OpenAI version of Chrome that would start with ChatGPT search by default instead of Google.com, and that would offer agentic integration to complete tasks such as shopping, booking appointments, or unsubscribing from unwanted emails.

    “Such a coup would instantly transform OpenAI’s outlook from a provider of LLMs that sit behind the scenes of major tech brands to one of the most prominent direct owners of customer relationships in the world,” said Jackson. “ChatGPT is already propelling it into that territory, but Chrome is still many magnitudes more well used.”

    As for the impact all of this might have on enterprise users, Harrington said that “it would make sense for Chrome Enterprise (CEB) to go along with the ‘consumer’ Chrome business, and while there are two editions — one free, one paid — the paid version really just opens up the number of functions you can manage inside the browser. The functions are there already, it’s just a matter of how they’re turned on and off.”

    All of this, he said, “is tied to Chrome itself, and Chrome Enterprise is just a rebundled version of it into an MSI (for enterprise distribution and an enterprise approach to configuration) with ADMX templates.”

    Anshel Sag, principal analyst at Moor Insights & Strategy, agreed, saying he could not imagine Chrome Enterprise being sold separately from the consumer version, “since they are fundamentally the same browser with different features and security.”

    A question of trust

    Asked about the level of trust CIOs and CISOs have in Google compared to OpenAI, he said, “there is more trust in Google, simply because the company has shown that it’s serious about security and privacy. That said, OpenAI doesn’t really have that kind of a track record, and while it does work with enterprises, it is still unclear how it might handle something like a browser, and all of the responsibilities that come with maintaining that level of security.”

    One of the biggest costs that would further impact OpenAI’s profitability is R&D for Chrome, and ensuring it remains secure, Sag added.

    Jackson said there is no doubt Chrome Enterprise would be impacted. The service, he said, is “the browser with some other enterprise management capabilities wrapped around it. It’s possible OpenAI would reach an agreement to still allow Google to own and sell its enterprise contracts there, but the product at the core of it would still be controlled by OpenAI.”

    Then, he said, “there’s the question of what OpenAI does with the data, and how the change in ownership would impact enterprise trust. When it comes to cybersecurity in the enterprise, the modern best practice approach is a zero-trust approach. Zero-trust principles say to treat all connections as potentially hostile, limiting access to systems and data based on only what is absolutely needed by an authenticated user.”

    The ‘devil we know’

    For browsers, he said, “there are a couple of ways that enterprises try to make that work. Some large enterprise clients will use Chrome Enterprise, which offers browser policy management. This is where they can limit what data the browser can access and what it can do with that data.”

    Other enterprises, said Jackson, would let users download pre-approved consumer browsers, likely including Chrome, and have IT manage them with other centralized administration tools such as Windows Active Directory or a Unified Endpoint Management solution.

    He said, “unless OpenAI tries anything too aggressive with data collection, we’ll see enterprises take the same approach to web browsers in the future. Ultimately, Google is also a large data-hungry tech giant that is trying to train its own AI models, and we’d have every reason to be as concerned about its motivations to harvest user data as OpenAI. But it’s the devil that we know.”

  5. A growing chorus of European technology executives is calling for the continent to assert control over its digital future, with Danish IT services giant Netcompany leading the latest push for technological self-reliance.

    In an open letter published on Wednesday — a day after the symbolic illumination of the Statue of Liberty replica in Paris — Netcompany CEO André Rogaczewski explicitly challenged Europe’s dependence on foreign technology platforms and urged the region to “bring our data home.”

    “From social media to cloud infrastructure, from applications to algorithms, we are dependent on technologies developed elsewhere, by actors who may not share our values,” warned Rogaczewski, whose firm employs over 8,000 technology consultants across Europe.

    The high-profile campaign comes amid escalating tensions in global technology markets, with the European Commission recently intensifying scrutiny of US cloud providers’ market practices and ahead of upcoming EU-US discussions on trans-Atlantic data governance frameworks.

    “We are calling for European solutions — built by European companies, run on European data, and accountable to European citizens,” Rogaczewski stated, directly challenging the market dominance of American tech giants including Microsoft, Google, and Amazon Web Services, which collectively control a significant majority of Europe’s cloud infrastructure market according to industry reports.

    Europe’s strategic pivot in digital policy

    The remarks come amid a concerted push by European governments and institutions to localize control over key digital systems. Recent EU policies — the Digital Services Act, the Digital Markets Act, and the AI Act — are part of an evolving legal framework to strengthen regional oversight of platforms, algorithms, and cloud-based services.

    A month ago, leading European companies and lobbying groups — including Airbus, Element, and Nextcloud — under the umbrella of the “EuroStack Initiative” signed an open letter urging the creation of an EU sovereign infrastructure fund to boost public investment in innovative technologies and build strategic autonomy in key sectors.

    “Building strategic autonomy in key sectors is now a recognised urgent imperative across Europe. As part of this common effort, Europe needs to recover the initiative, and become more technologically independent across all layers of its critical digital infrastructure,” the EuroStack letter read.

    These initiatives follow a global trend where technology is no longer seen purely through the lens of innovation or efficiency but as a strategic national asset. The US has tightened its grip on semiconductor exports to China. China, in turn, is accelerating its own domestic tech stack and enforcing data localization. In this shifting context, Europe’s historical reliance on the US and Chinese digital infrastructure has become a liability.

    Building a European tech ecosystem

    Netcompany, a publicly listed IT services provider with operations across Europe, is among a growing number of regional firms advocating for digital sovereignty. Its CEO’s comments underline the urgency of reducing reliance on US-based cloud giants and software vendors. Instead, the letter encourages a continental effort to cultivate indigenous technologies that align with European legal standards and ethical norms.

    “Technology lies at the heart of our wealth creation,” Rogaczewski said. “It drives our competitiveness and sits at the very center of how we communicate, learn, and develop as societies.”

    This vision extends beyond public discourse into concrete initiatives. GAIA-X, a European cloud infrastructure initiative, exemplifies this push toward a sovereign tech ecosystem, alongside other strategic programs including SiPearl and the EU’s AI Continent Action Plan that target capabilities in cloud infrastructure, semiconductors, and AI.

    US tech giants have not been idle in response to these sovereignty concerns. Amazon Web Services, for instance, has committed to a €7.8 billion ($8.9 billion) investment in an “AWS European Sovereign Cloud” and maintains that its approach has been “sovereign-by-design” from the beginning, with customers having “complete control over where they locate their data” within European regions and verifiable control over who can access it.

    “While complete technological independence is a complex and long-term goal, Europe is clearly building momentum toward digital and AI sovereignty,” said Shreeya Deshpande, senior analyst at Everest Group, highlighting how the coordinated nature of such efforts reflects growing momentum across Europe.

    While challenges remain, particularly in scaling and integrating across fragmented markets, the political will and regulatory backing for European tech nationalism are growing.

    Sovereignty without isolation

    Rogaczewski’s appeal reflects a growing consensus among European stakeholders that sovereignty does not mean isolation. Rather, it signals a recalibration of Europe’s role in the global digital order. Europe is seeking to maintain open markets and innovation, while ensuring that core digital infrastructure and sensitive data remain under regional control.

    “Emerging mechanisms, such as data embassies and sovereign cloud frameworks, offer a practical middle path — enabling countries to maintain legal and operational control over data and AI systems while remaining interoperable with global platforms,” Deshpande added.

    The message resonates with policymakers who see technology not just as a tool of commerce but as a pillar of democratic governance. “This places our security, sovereignty, and democracy at risk,” Rogaczewski warned, referring to Europe’s current dependency on foreign platforms.

    The lighting of the Statue of Liberty — once gifted by France to the US as a symbol of shared democratic values — served as a potent backdrop to Rogaczewski’s message. He framed his letter as both a reminder of historical ties and a warning that those values are now “under heavy pressure.” “Our modern societies are based on the very same principles of freedom and democracy,” he wrote. “We must stand united in our commitment to these values and fight for them each and every day.”