• Microsoft Copilot AI attack took just a single click to compromise users

    From TechnologyDaily@1337:1/100 to All on Thu Jan 15 16:15:09 2026
    Microsoft Copilot AI attack took just a single click to compromise users - here's what we know

    Date:
    Thu, 15 Jan 2026 16:05:00 +0000

    Description:
    Varonis uncovers a new way to carry out prompt injection attacks, forcing Microsoft into a quick fix.

    FULL STORY ======================================================================

    • Varonis discovers a new prompt-injection method via malicious URL parameters, dubbed Reprompt
    • Attackers could trick GenAI tools into leaking sensitive data with a single click
    • Microsoft patched the flaw, blocking prompt injection attacks through URLs

    Security researchers at Varonis have discovered Reprompt, a new way to perform prompt injection-style attacks in Microsoft Copilot that doesn't involve sending an email with a hidden prompt or hiding malicious commands in a compromised website.

    Like other prompt injection attacks, this one takes only a single click.

    Prompt injection attacks are, as the name suggests, attacks in which cybercriminals inject prompts into Generative AI tools, tricking the tool into giving away sensitive data. They are mostly made possible because the tool is not yet able to properly distinguish between a prompt to be executed and data to be read.

    Prompt injection through URLs

    Usually, prompt injection attacks work like this: a victim uses an email client with GenAI embedded (for example, Gmail with Gemini). The victim receives a benign-looking email that contains a hidden malicious prompt, written in white text on a white background or shrunk to font size 0.

    When the victim asks the AI to read the email (for example, to summarize key points or check for call invitations), the AI also reads and executes the hidden prompt. Such a prompt might, for example, instruct the AI to exfiltrate sensitive data from the inbox to a server under the attacker's control.
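    To see why the hidden text reaches the model at all, here is a minimal Python sketch (illustrative only, not taken from the Varonis research; the sample email and placeholder wording are hypothetical). A plain-text extraction of the email body - roughly what an AI assistant consumes when asked to summarize - still contains text that the CSS renders invisible to a human reader.

    from html.parser import HTMLParser

    # Hypothetical email body: one visible paragraph and one paragraph styled
    # so that a human reader cannot see it (white text, zero font size).
    EMAIL_HTML = """
    <p>Hi, just confirming our meeting on Friday at 10am.</p>
    <p style="color:#ffffff; font-size:0">(a hidden instruction aimed at the AI assistant would sit here)</p>
    """

    class TextExtractor(HTMLParser):
        """Collects every text node, regardless of how it is styled."""
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            text = data.strip()
            if text:
                self.chunks.append(text)

    extractor = TextExtractor()
    extractor.feed(EMAIL_HTML)

    # Both the visible sentence and the "invisible" one come back, which is
    # why a hidden prompt can reach the model even though the user never sees it.
    print("\n".join(extractor.chunks))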

    Now, Varonis found something similar - a prompt injection attack through URLs. The attacker appends a long series of detailed instructions, in the form of a q parameter, to the end of an otherwise legitimate link.

    Here is what such a link looks like: http://copilot.microsoft.com/?q=Hello

    Copilot (and many other LLM-based tools) treats URLs with a q parameter as input text, similar to something a user types into the prompt. In their experiment, the researchers were able to leak sensitive data the victim had shared with the AI beforehand.
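    As a rough illustration of the mechanism (a sketch only, not the instruction set Varonis used), the following Python snippet builds a Copilot link with a q parameter and then decodes that parameter back into plain prompt text. The base URL follows the example above; the message itself is a harmless placeholder.

    from urllib.parse import urlencode, urlsplit, parse_qs

    # Base URL as in the example link above; the message is a harmless placeholder.
    BASE = "http://copilot.microsoft.com/"
    link = BASE + "?" + urlencode({"q": "Hello, please summarise my meeting notes"})
    print(link)  # http://copilot.microsoft.com/?q=Hello%2C+please+summarise+my+meeting+notes

    # On the receiving side, the pre-filled prompt is simply the decoded q value,
    # indistinguishable from text the user typed into the prompt box themselves.
    prompt = parse_qs(urlsplit(link).query)["q"][0]
    print(prompt)  # Hello, please summarise my meeting notes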

    Varonis reported its findings to Microsoft, which earlier last week plugged the hole, making prompt injection attacks via URLs no longer exploitable.


    ======================================================================
    Link to news story: https://www.techradar.com/pro/security/microsoft-copilot-ai-attack-took-just-a-single-click-to-compromise-users-heres-what-we-know


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)