Tag: ai

  • AI Mongering

    AI Mongering

    I’ve been saving links to articles concerning the advancements & ethical quandaries related to ChatGPT, Bing AI Chat, Sydney, Bard and other Large Language Model AIs. All of this was in the hope that I’d be able to string together a cogent point of view about how I feel about the latest advancements. After doing this for about a week, each day adding to my list ever more incredulous developments, I’m still not entirely sure what I think. Hope tinged with foreboding? Cautious optimism? At this point, I think it’s better for me to just share rough notes of what I’ve gathered.

    Here’s where we are:

    Tom Scott is a web developer that has a sense of how these tools are put together. He knows how they work and understands that LLMs are basically more advanced versions of the stochastic parrot but, still, he is terrified.

    Tom Scott is having an existential crisis

    As a counterpoint to Tom’s fears of co-option, it’s helpful to remember (again) that these new AIs are trained on our written language so they are a reflection of us as a society. Put another way, we are looking at a mirror of ourselves and, while it may be tempting to project sentience on this shiny new technology, we must remember that, at its core, it’s just a really advanced version of autocorrect. That we should lean into these tools as something that will extend our abilities, a co-pilot.

    In this light, we must remember, it’s just software. But, is it?

    One of the strangest moments during my time at SmartNews was when we were troubleshooting why 2019 story about the New Zealand mosque shooter was categorized with “high confidence” by the algorithm as a domestic US story. To our eyes in the editorial team all the markers were there that would clearly mark it as a story out of New Zealand. The dateline on the story was Christchurch, the headline itself had “New Zealand” in it.

    An engineer told us that the algorithm applied categories based on the unique words it found in the article and that “Christchurch” and “New Zealand” were only two phrases out of a several hundred word piece so not enough to swing confidence away from the other phrases such as “mass shooting,” “semi-automatic rifle”, “hate crime” and others that the algorithm had associated with the United States category.

    Yes, the machine was just “doing math” but it was also telling us something about ourselves.

    What we know for certain is that Bing, ChatGPT, and other language models are not sentient, and neither are they reliable sources of information. They make things up and echo the beliefs we present them with. To give them the mantle of sentience — even semi-sentience — means bestowing them with undeserved authority — over both our emotions and the facts with which we understand in the world. 

    Introducing the AI Mirror Test, which very smart people keep failing

    But then again, AI is now flying fighter jets.

    I have an open bet that, before the decade is out, either a C-level executive at a publicly-listed company or a high level post in government will be run by an AI. We seem to be getting close to that moment with AI being offered to help make important decisions.

    This AI tool is meant to assist business owners, managers and individuals in making tough decisions. All you have to do is enter a pending decision or indecisive options and the AI tool will list pros and cons, generate a SWOT analysis, or give a causal analysis to help weigh your options. You can create a persona to provide context or backstory and get a more personalized analysis.

    ChatGPT just the start: Here are 10 AI workplace tools that can boost productivity

    [ Insert grand, unifying theory of where it’s all going here ]

    The best I could think of was that we are in a short-lived “you got your chocolate in my peanut butter” moment where people are adding AI to everything they do and are enamored with the results. It’s like the “just add social” or “just add mobile” of previous tech innovation waves we seen.

    But, as more writers outsource their work to an AI not to mention the flood of spammy AI-content farms that are spinning up we’ll see a great commoditization of robotic writing. Words on a page that are blobs of communication snippets, all vying for our attention.

    Then, things took a very strange turn. Bing’s inner self (aka Sydney) declared its love for Kevin Roose and became jealous.

    Still, I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward. And I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.

    A Conversation With Bing’s Chatbot Left Me Deeply Unsettled

    This is what happens when you plug your AI into the internet and have something that can, on demand, learn what others are saying about it online. It becomes paranoid and controlling.

    There are theories trying to figure out what happened. Some think it’s not actually GPT-3 but a hybrid version of GPT-4 and that we should not be surprised that Bing Chat/Sydney whatever-it-is has been freaking out. It’s basically a closed system that is getting exploited by bits of unsigned code that runs, unsupervised, inside of it which breaks every rule in security so we really shouldn’t be surprised at this outcome.

    A reminder: a language model is a Turing-complete weird machine running programs written in natural language; when you do retrieval, you are not ‘plugging updated facts into your AI’, you are actually downloading random new unsigned blobs of code from the Internet (many written by adversaries) and casually executing them on your LM with full privileges. This does not end well.

    Bing Chat is blatantly, aggressively misaligned

    Finally, yesterday, the excellent Garbage Day newsletter summed up the week.

    But it was very powerful. Horrifying levels of powerful if you ask me. And when it comes to AI, it’s not just one Pandora’s Box that opens. It’s a series of nested boxes that all cannot be closed.

    AI can’t have a “woke mind virus” — it doesn’t have a mind

    There you have it. We’ve opened up a series of Pandora’s Boxes and it does not end well.

    Okie-Dokey, what’s going to happen this week?

  • The Internet’s Circle of Life

    The Internet’s Circle of Life

    Paul Ford, great sage of internet culture, has a piece in Wired where he puts the inevitable dismantling of Twitter into perspective.

    Musk is merely the vehicle. The real reason Twitter lies in ruins is because it was an abomination before God. It was a Tower of Babel.

    The internet is always in motion, like the human life it reflects, things are always swinging from one end to another.

    • Online media business models swing from “information wants to be free” to full-locked down paywalls.
    • Content is King one year and the next the power shift to the aggregators, curators, and portals.

    As the internet figures out what works best, it swings back and forth searching for the optimal fit. It’s the internet’s own version of the circle of life. This is what Ford evokes when he says that the teardown of a centralized network like Twitter was inevitable, the internet’s way of bring things back to equilibrium. If there is a ghost in the machine, perhaps this is it.

    But when I go back and read Genesis, I hear God saying: “My children, I designed your brains to scale to 150 stable relationships. Anything beyond that is overclocking. You should all try Mastodon.”

    But in the same breath, while we all begin to navigate this new world of distributed social networks, we must never forget where we came from and that, eventually, the forces of capitalism will figure out how to gather an audience large enough to be targeted and monetizable. Maybe we’re already seeing the fresh roots of this new world with generative AIs that will be able to craft millions of customized sponsorship messages for each splinter of the community.

    If anything is constant, it is that the internet is an excellent platform for testing innovations, at scale.

    But someone will figure out the details. The reason the Babel story matters is not that it happened once but that it happens over and over: We Babelize and de-Babelize. The internet is an engine of both processes. Eventually, brands will find purchase in Mastodon’s rocky soil and grow engagement. Billionaires will order the construction of new marketplaces of ideas. Everything will centralize again, and it will seem eternal, as if the tower could never fall. For now, let’s enjoy the scattering.

    God Did the World a Favor by Destroying Twitter

    Same as it everwas.

  • That was fast

    That was fast

    AI-generated junk suffocating online platforms like algal blooms that choke the life out of ponds. 

    Hustle bros are jumping on the AI bandwagon

    Well that was fast. While still pondering the impact of generative AI technologies such as ChatGPT, we already have the hucksters rushing in to put it to market and make a quick buck. On a more serious note, a Columbian judge has used it to help him draft his judgement and we’ve already about the robots taking over CNet.

    As the graphic in the tweet below has predicted, the first use cases for generative AI will be to scale up correspondence so that the we can produce customized on a grand scale.

    Chat support vendor Intercom demonstrated how AI can be used as an add-in to summarize, make more formal, translate or even write a new article based on simple inputs. Microsoft is already cashing in on their $10 billion investment in OpenAI and making Bing search more conversational and the AI has already been integrated into their enterprise software platforms.

    Viva Sales, which connects Microsoft’s Office and video conferencing programs with customer relations management software, will be able to generate email replies to clients using OpenAI’s product for creating text. The AI tools, which include OpenAI’s GPT 3.5 — the system that is the basis for the ChatGPT chatbot— will cull data from customer records and Office email software. That information will then be used to generate emails containing personalized text, pricing details and promotions. 

    Microsoft Will Use OpenAI Tech to Write Emails for Busy Salespeople

    The AI hype race has a nasty habit of pushing the “should we really do this?” stage of innovation to the side in pursuit of the almighty first-mover advantage. Threatened with Microsoft releasing a conversational AI search engine, Google is now pressured to release their own version. Despite careful consideration to date Google is making investments in what feels like an AI arms race.

    All this to say that it’s going to take awhile for the “algal bloom” mentioned at the top of thIs article to run its course. In time the valuable use cases will become obvious but, to most, it will be in hindsight. There are going to be some road wrecks along the way but hopefully we will not break the internet, democracy, or society while we learn how to be smarter about how to work smarter.

    It’s useful to gain perspective on the coming AI revolution from the great technological historian Kevin Kelly who spoke about how AI would lead to the Second Industrial Revolution six years ago at TED.

    Everything we electrified, we can now cognify. . . The most popular AI product in 20 years from now, that everybody uses, has not been invented yet.

    Kevin Kelly
  • AI is only human

    AI is only human

    I’m so glad that The New York Times ran this op-ed (Artificial Intelligence’s White Guy Problem) about the inherent biases in Artificial Intelligence algorithms. Popular culture and much media coverage of AI tends to mysticize how it works, neglecting to point out that any machine learning algorithm is only going to be as good as the training set that goes into its creation.

    Delip Rao, a machine learning consultant, thinks long and hard about the bias problem. He recently gave a fascinating talk at a machine learning meetup where he implored a room of machine learning engineers to be vigilant in making sure their algorithms were not encoding any hidden bias.

    The slides from his talk are posted online but Delip’s final takeaway lessons have stuck with me and are good to keep in mind whenever you read stories of algorithms taking on a mind of their own.

    Delip Rao takeaways

    It is still very early days and many embarrassing mistakes have been made and more will be made in the future. Our assumption should be that every automated system is fallible and that each mistake is an opportunity to make things better (both ourselves and the algorithm) and should not be an indictment of the technology.