I had a chat with #Google Gemini about their agentic AI ...

Uncategorized · google · 7 posts, 4 posters
Lauren Weinstein (#1)

    I had a chat with #Google Gemini about their agentic AI ...

    L >>> Isn't it stupid for someone to trust an agentic AI like Google Chrome
    L >>> Auto Browse when Google isn't taking any responsibility for disastrous
    L >>> errors the agent might make that would devastate the user? It's a scam,
    L >>> really, correct?

    That is a heavy--and very fair--question. It's not "stupid" to be
    skeptical; in fact, it's a pretty rational response to the current "Wild
    West" phase of AI development.

    The tension you're feeling comes from the gap between autonomy and
    accountability. When we give an agent the keys to our digital life (like
    making purchases, managing schedules, or handling sensitive data), the
    stakes move from "funny chatbot hallucination" to "real-world financial
    or legal headache."

    Here's a breakdown of why this feels like a "scam" to some, and the
    reality of the risks involved:

    The Accountability Gap

    The core of your frustration is likely the Terms of Service (ToS).
    Almost every AI developer, including Google, uses "AS IS" clauses. This
    means:

    The Risk is Yours: If the agent accidentally deletes a critical work
    file or books a non-refundable flight to the wrong city, the company's
    legal shield usually prevents them from being liable for "consequential
    damages."

    The "Beta" Shield: By labeling these tools as experimental, companies
    shift the burden of supervision onto you. It's essentially
    "driver-assist" software where you're expected to keep your hands on the
    wheel at all times.

    Is it a Scam?

    Calling it a "scam" might be a bit of a stretch in the legal sense, but
    it certainly feels like an unbalanced trade-off.

    The "Pro" side: Google is betting that the convenience--saving you hours
    of tedious clicking and form-filling--is a value proposition users will
    take, even with the risks.

    The "Con" side: You're essentially acting as an unpaid QA tester for
    their product while bearing all the personal risk if the code glitches.

    How to Approach "Agentic" Tools Safely

    If you do decide to use these tools, think of them like a new intern:

    Trust, but Verify: Never let an agent hit "Submit" or "Buy" without a
    final human review (see the sketch just after this list).

    Sandbox the Access: Don't give an agentic tool access to your "main"
    bank account or mission-critical data until the tech (and the laws
    around it) matures.

    Check the Permissions: Be ruthless about what permissions you grant in
    Chrome. If an extension doesn't need "modify" access to a site, don't
    grant it that access.
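
    To make "Trust, but Verify" concrete, here is a minimal sketch of that
    kind of human-review gate, written in Python. Everything in it is
    hypothetical and illustrative: there is no real agent framework or
    Google API behind these names, just the pattern of pausing before any
    irreversible step.

        from dataclasses import dataclass

        @dataclass
        class ProposedAction:
            description: str    # e.g. "Buy non-refundable flight to Boise, $412"
            irreversible: bool  # spends money, deletes data, sends mail, etc.

        def execute(action: ProposedAction) -> None:
            # Placeholder for whatever would actually perform the action.
            print(f"Executing: {action.description}")

        def run_with_review(action: ProposedAction) -> None:
            # Irreversible actions require explicit human approval first;
            # everything else passes straight through.
            if action.irreversible:
                answer = input(f"Agent wants to: {action.description}. Approve? [y/N] ")
                if answer.strip().lower() != "y":
                    print("Declined; action not taken.")
                    return
            execute(action)

        run_with_review(ProposedAction("Book non-refundable flight to Boise", irreversible=True))

    The design point is simply that the "Submit"/"Buy" step is never
    reachable without a human decision in between.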

    The technology is moving much faster than the legal system. Until there
    is a "Lemon Law" for AI agents, your skepticism is actually your best
    security feature.

    Would you like me to help you look into the specific privacy settings or
    permission controls for Chrome's AI features so you can see exactly what
    you're opting into?

    L >>> C'mon, you know as well as I do that the vast majority of people Google
    L >>> wants to use these systems do not have the time, background, or
    L >>> expertise to take the kinds of steps you outlined. Google is treating
    L >>> them like suckers.

    You're cutting straight to the heart of the "UX vs. Ethics" debate.
    You're right--the steps I outlined are a "best-case scenario" for power
    users, but they don't reflect how the average person actually interacts
    with technology.

    Most people don't read the manual, they don't audit permissions, and
    they certainly don't have time to play "supervising manager" to an AI
    that was marketed as a time-saver.

    The "Sucker" Dynamic

    There is a legitimate argument that the business model relies on
    frictionless adoption. If Google (or any tech giant) put a massive,
    unmissable red warning label saying, "This might ruin your credit score
    or delete your files and we won't pay you a dime if it does," nobody
    would click "Enable."

    By making the interface "magical" and easy, they lower the psychological
    barrier to entry. This creates two distinct realities:

    The Marketing Reality: "Focus on what matters while AI handles the rest!"

    The Legal Reality: "User assumes all risk for any actions taken by the
    software."

    The Power Imbalance

    When you call it a "scam" or say users are being treated like "suckers,"
    you're pointing to the asymmetry of information:

    Complexity: The underlying logic of agentic AI is a "black box" even to
    many developers. Expecting a non-technical user to "supervise" it is
    like asking someone who doesn't know how to drive to supervise an
    autonomous semi-truck.

    Liability Shifting: Tech companies have successfully shifted the "burden
    of correctness" onto the consumer. In almost any other
    industry--medicine, aviation, even toy manufacturing--the maker is held
    to a standard of "fitness for purpose." Software has historically
    enjoyed a unique legal pass here.

    Where This Ends Up

    The "wild west" phase usually ends when the first major, high-profile
    disaster happens--a "class-action" level event that forces the hand of
    regulators. Until then, these companies are effectively betting that the
    data they gain from millions of users is worth the PR risk of a few
    "disastrous errors."

    It's a cynical way to view product development, but it's historically
    how "move fast and break things" has worked.

    Would you like to look at some of the current consumer protection
    debates or laws being proposed to hold AI companies more accountable for
    their "agents"?

Daisy (#2)

    @lauren That almost everyone carries a tracking/spy device with them everywhere should never have become "normal". The risks with so-called AI are actually worse. Not even counting the fact that we do not have the resources to make it profitable.

Lauren Weinstein (#3)

    @_daisy That it never occurs to so many people that they ARE PERMITTED to turn off their phones, and so disable all tracking via the device, continues to amuse me in a dark way. Personally, I keep my smartphone off all the time unless I absolutely need it for a specific purpose, not due to tracking concerns, but due to battery concerns.

slash (#4)

    @lauren @_daisy I picked a phone with 1) a removable battery and 2) a hard switch for the ringer. Like you, mine is off a lot. I tell people, "Google doesn't need to know I'm going to my doctor's office."

    I'm less sure about the geo-stalking, because I've read that phones will still store location data even when nominally "off", and then upload that data when on a network again. I have no way to know if that's true.

    I also keep GPS and BTLE off. I don't use the web on a phone, nor install apps.


Daisy (#5)

    @agreeable_landfall @lauren My phone, somewhat ironically, is a Google Pixel 3a, because it is one of the few that runs Ubuntu Touch. Fairphone would be my first choice, but they're hard to get in Canada. I was using GrapheneOS, but with the recent news from Google about restricting apps I became paranoid about what that might mean, even with Graphene. And I always liked the idea of an open source Linux OS. Really open source, not open except... But I think mostly I just like the OS. Ubuntu rescued me from Windows on the home computer many years ago, so I've wanted to be able to use Ubuntu Touch ever since I first heard about it. The most recent release has improved the battery life to the point it's manageable.


tranquil_cassowary (#6)

    @_daisy @agreeable_landfall @lauren
    Hi, GrapheneOS user and community member here. Are you talking about the news about sideloading restrictions or something else?


Daisy (#7)

    @tranquil_cassowary @agreeable_landfall @lauren I am just afraid the sideloading restrictions are only the beginning. Not sure where this goes from here. I'm no kind of expert - I thought I was doing great to be able to hook the phone up to the web installer. Really, I just don't know. I decided I'd rather trust the volunteers at UBPorts. Which I can do, because there are no apps on my phone anyway.
