Tag: technology

  • How AI Redefined Access and Ownership in 2025


Tried something different this year: asked ChatGPT to summarize the year for me.

One benefit of having the entire archive of this blog as part of OpenAI’s learning model is that it can spit back what it’s learned, not only in substance but also in tone. It’s uncanny.

    Here’s how ChatGPT responded:

    Absolutely — based on Ian Kennedy’s actual writing on everwas.com (which covers tech policy, AI’s impact on the open web, local NYC issues, and personal essays), a hypothetical 2025 Year-in-Review blog post from him would blend thoughtful analysis of tech trends with sharp local and cultural commentary.

    Here’s how Ian Kennedy of everwas.com might write a “2025: A Year in Review” post:

    2025 felt like the year the internet finally stopped pretending it was the open playground of ideas and began behaving like what it’s actually become: a contested battleground where ownership, access, and value are constantly being negotiated — not just by corporations, but by governments, communities, and the technology itself.


    AI Changed the Web — But Not in the Way We Expected

    At the start of the year, conversations around artificial intelligence were dominated by shiny demos and sci-fi ethics. By year’s end, the conversation shifted to who owns the data and who gets paid for it.

Policy makers, IAB task forces, and publishers around the world began framing the debate in terms of rights and compensation, not just innovation. The old model, where content was indexed freely by search and repurposed by AI, is now being replaced by a future where:

    • Publishers insist on licensing and metered access
    • AI crawlers are treated like commercial participants
    • Content is tokenized and monetized rather than scraped for free

    This wasn’t just theory — it became a reality for countless newsrooms and creators struggling to survive in an era where AI can answer queries without sending anyone to click a link.


    Cloud Infrastructure vs. the Power Grid

    Machine learning isn’t just a software problem. The massive energy demands of data centers pushing GPUs to their limit became a real infrastructure challenge this year. We saw:

    • Idle data centers waiting for electricity capacity to catch up
    • Experimental approaches like repurposed jet engines and space-based solar projects
    • Real worries about quality of life when computing giants annex local power grids

    The future of “compute anywhere” suddenly looks a lot more tied to where the electrons actually come from.


    New York Showed the Rest of the Country

Living in this city means Local often becomes National. In 2025, New York City was a microcosm of the larger cultural and political shifts:

    • A mayoral election turned on cost of living and democracy reform
    • Community voice battled casino development bids on the East River
    • Neighborhood tech events and decentralized meetups flourished alongside big conference circuits

    The city didn’t just make headlines — it embodied the tension between grassroots citizen engagement and centralized power.


    Tech Week, Brand Conversations, and the (Un)Informed Public

    2025 also saw a subtle but unmistakable shift in how technology and culture engage:

• The growing success of events such as NYC Tech Week and Tech:NYC AI Demos, which felt less like corporate showcases and more like community patchwork gatherings
    • Branding in an AI-driven world became less about identity and more about trust and permission
    • Conversations about Google, Perplexity, and discovery technology moved from technologist circles into mainstream debate

    It’s hard to overstate how much that matters. When ordinary people begin arguing about the future of search and discovery, the underlying structure of the web itself has already changed.


    Personal Notes from the Year

    Beyond the geopolitical and technological, 2025 was also a year of small perspectives — hikes, travel, and moments that reveal a deeper truth about why culture still matters even when platforms shift.

    Whether it was a hike near Kamakura or a rumination on a slow waterfall (ed. this is a pretty funny interpretation of this post), these personal essays reminded readers that tech is not just code — it’s lived experience.


    Looking Ahead to 2026

    If 2025 taught us one thing, it’s this:

• The open web is not going quietly into the night — it’s being redefined.
    • Access now has a price tag, and that price is up for negotiation.
    • The people using the technology are just as important as the companies building it.

    The next year won’t be about whether the web changes — it’ll be about who gets to shape that change.

    Here’s to the debates, the fights, and the small stories that make it all worth caring about.

  • What I Learned at #nytechweek


Last week NYC erupted with over 1,000 small pop-up events across the city, loosely curated by the NY Tech Week team on this calendar. I would describe it as a decentralized SXSW: the events are hosted without much central control, which results in an explosion of activity and gives participants a chance to expose themselves to a wide range of ideas while visiting workspaces all across the city.

    The events begin to come together in the months leading up to the annual event. Most of them are invite-only so you need to sign up in advance. Sign-up forms on Partiful ask you to add your LinkedIn profile so the organizers can vet & curate who attends.

    I only had time in my schedule to make a few afternoon or evening events but it was nice to sample a few, meet some new people and check out some cool spaces.

    Here’s what I learned.

    “Influencers are toast” said someone after seeing the demo of Mirage Studio an “AI-powered video generation platform that allows you to create lifelike talking-head videos without traditional production.”

Part of me is happy to bid goodbye to social media influencers shilling products they didn’t truly like, understand, or appreciate. The founder made the argument that their platform would open up access to imaginary personalities to help explain or position a product, leveling the playing field and allowing smaller companies to punch above their weight.

I’m still not sure how I feel about this development, especially after seeing what people are doing with Google’s Veo 3. Further, if the key to communication is creating a sense of empathy, what happens when we give the keys to this rich protocol to a series of APIs? What becomes of human dialogue and communication or, more philosophically, what is real?

    Then someone leaned over to tell me about Nucleus Embryo. In their words,

    When undergoing IVF, couples typically have several viable embryos to choose from.

    Nucleus Embryo provides information on the disease risks and traits of each of these embryos, helping parents make an informed choice on which embryo they want to implant.

    On the way to the next event, strange new world thoughts spinning in my head, David J’s song, Stop this City was playing on repeat.

    Thankfully, my last event of the day was about the power of community and face-to-face collaboration at a shared workspace in an old factory in Tribeca at Fabrik.

    The next day, at the Axios AI + NY Summit I saw Jeffrey Katzenberg paraphrase NVIDIA CEO Jensen Huang, “AI isn’t going to replace people. It’s going to replace people that don’t use AI,” as he drew parallels between the influx of AI technology to the arrival of Pixar’s computer animation at Disney’s studios.

    Later, New York Governor Kathy Hochul sat down to speak about state initiatives such as Empire AI but also, about what was on everyone’s mind, the challenges of dealing with the Trump administration.

The next day, on the 82nd floor of 30 Hudson Yards, brand strategist George Scribner shared his perspective on Branding in the Age of AI.

The outlook for Google is not great, as the company has neglected to evolve its brand beyond a functional search utility. Scribner’s view is that there is not enough brand loyalty and that people will flock to a better tool now that the new conversational search experiences of the AI platforms have leveled the playing field. I would have to agree, but user lock-in is a powerful thing, and Google’s recent push with AI Mode will work even better for those with lots of data and history stored with Google.

    View of Manhattan from the 82nd Floor thanks to Innovation Norway

I also dropped by a mixer for Japanese businesses and sat in on a presentation by a representative from the City of Yokohama, which maintains a local NY office to woo startups to Japan.

    On Thursday, I dropped in on a round table with several SEO people talking about Generative AI chatbots and the future of discovery in this new world. There’s enough there for another post which will follow.

  • Turning Point?


    A crowd spontaneously turned on a Waymo robot car when it tried to push its way through a Chinese New Year’s crowd in San Francisco’s Chinatown.

    So there were fireworks happening – pretty fun night and folks watching the fireworks. Every now and then traffic would build up and the crowd would make way for it to clear.

There was a small traffic holdup with the Waymo at the front of it. Then someone in a white hoodie jumped on the hood of the car and literally WWE-style K/O’d the windshield & broke it.

    The crowd was shocked. People started paying attention and gathered around. Nothing happened for another 30 seconds until someone else jumped on the hood.

    Then a group of people joined in affirming the behavior w/ positive feedback. Clapping etc.

    That was when it went WILD. People with skateboards breaking the glass, and others graffitiing the car.

    A Mob Just Vandalized A Waymo Self-Driving Car And Set It On Fire. The Videos Are Nuts

    People are getting fed up.

    Now, more than ever, Silicon Valley should be paying attention. We all should. Because the torching of the Waymo car may well prove to be a turning point. If no one in power is going to listen to the growing chorus of people shouting their fears that big tech has concentrated too much wealth, influence, and control over their lives — or to the legion of New Luddites organizing against the excesses of Amazon, generative AI giants, and self-driving car companies — then this smoldering husk of an AI-driven robot may merely be the first.

    Torching the Google car: Why the growing revolt against big tech just escalated

    Remember that other Tracy Chapman song?

  • The Three Laws of AI

    When my previous company started using technologies such as machine learning to automate tasks such as curation, Rich Jaroslovsky, an experienced newsman who pioneered using web technology to build the online version of The Wall Street Journal, circulated a memo with three simple guidelines that are applicable to anyone thinking of using AI to automate their newsroom.

    SmartNews was at the forefront of using technology to process, curate, and rank large volumes of news stories so many of the hiccups we’re seeing in the application of AI to publishing today were front of mind for the company years ago.

    Rich’s memo was a riff on Isaac Asimov’s Three Laws of Robotics reworked for today’s world where AI is being applied to any number of tasks in pursuit of scale and efficiency. This simple set of rules is useful as a checklist to help people think through the responsible application of autonomous technology.

    I’d encourage anyone who builds products that use AI to link to these rules from your product requirements template. I can say from experience that building features with these three simple tenets in mind will save your organization a lot of headaches going forward.

    Rich Jaroslovsky’s Three Laws of Automation

    1. It has to be highly automated. Our technology is what makes us scalable, and allows us to accomplish so much with so few people. I realize there is often a manually intensive phase when a new feature is being tested. But even in the testing phase, the question of how the task can be automated should be front-of-mind — and should be implemented when the feature is moved into full production, not as a “we’ll get to it” enhancement at some point in the distant future.
2. It has to provide visibility. That is, we have to know what the system is actually doing — what content it is sending out — at any given time. It’s not enough to learn after the fact, and then have to grapple with unintended consequences. For us non-engineers, at least, it’s much less important that we have visibility into the why or the how; visibility into the what is critical.
3. It has to allow for intervention when we spot problems — the ability to stop something bad from happening when we see it is happening, or is going to happen. This is much different from the concept of “human control,” where actions only take place if they are approved; such a model flies in the face of Rule #1. But it isn’t good enough to say we’ll just depend on the technology, wash our hands of the consequences, and figure we’ll fix it later if it is doing bad things.
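The three laws map naturally onto code. Here is a minimal, hypothetical sketch (the class and method names are my own invention, not SmartNews code) of a pipeline that honors all three: it runs without per-item approval, logs exactly what it sends out, and exposes a kill switch for intervention.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("curation")

class AutomatedCurator:
    """Toy pipeline illustrating the three laws: automated by default,
    visible via logging, and interruptible via a kill switch."""

    def __init__(self):
        self.paused = False   # Law 3: a human-operated intervention switch
        self.published = []   # Law 2: a record of exactly what went out

    def publish(self, stories):
        # Law 1: fully automated -- no per-item human approval required.
        for story in stories:
            if self.paused:
                log.warning("pipeline paused; holding %r", story)
                continue
            # Law 2: visibility -- log the WHAT, not just the why or how.
            log.info("publishing: %r", story)
            self.published.append(story)

    def pause(self):
        # Law 3: a human who spots a problem stops the pipeline outright,
        # rather than approving each item (which would violate Law 1).
        self.paused = True

curator = AutomatedCurator()
curator.publish(["story A", "story B"])
curator.pause()
curator.publish(["story C"])   # story C is held, not published
```

The design choice worth noting is that `pause()` stops future output without requiring a human in the loop for every item, which is what keeps Law 3 from colliding with Law 1.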

    What are your thoughts? Are there examples you’d care to share that are instructive on what can go wrong if you don’t heed these rules? I’m building my own list of how un-supervised AI has caused problems in publishing but if you’ve got some other stories, share them in the comments so we can all learn together.

  • AI Mongering


I’ve been saving links to articles concerning the advancements & ethical quandaries related to ChatGPT, Bing AI Chat, Sydney, Bard, and other Large Language Model AIs. All of this was in the hope that I’d be able to string together a cogent point of view about how I feel about the latest advancements. After doing this for about a week, each day adding ever more unbelievable developments to my list, I’m still not entirely sure what I think. Hope tinged with foreboding? Cautious optimism? At this point, I think it’s better for me to just share rough notes of what I’ve gathered.

    Here’s where we are:

Tom Scott is a web developer who has a sense of how these tools are put together. He knows how they work and understands that LLMs are basically more advanced versions of the stochastic parrot but, still, he is terrified.

    Tom Scott is having an existential crisis

As a counterpoint to Tom’s fears of co-option, it’s helpful to remember (again) that these new AIs are trained on our written language, so they are a reflection of us as a society. Put another way, we are looking at a mirror of ourselves and, while it may be tempting to project sentience on this shiny new technology, we must remember that, at its core, it’s just a really advanced version of autocorrect, and that we should lean into these tools as something that will extend our abilities: a co-pilot.

    In this light, we must remember, it’s just software. But, is it?

One of the strangest moments during my time at SmartNews was when we were troubleshooting why a 2019 story about the New Zealand mosque shooter was categorized with “high confidence” by the algorithm as a domestic US story. To our eyes on the editorial team, all the markers were there that would clearly mark it as a story out of New Zealand: the dateline on the story was Christchurch, and the headline itself had “New Zealand” in it.

An engineer told us that the algorithm applied categories based on the unique words it found in the article, and that “Christchurch” and “New Zealand” were only two phrases out of a several-hundred-word piece, not enough to swing confidence away from other phrases such as “mass shooting,” “semi-automatic rifle,” and “hate crime” that the algorithm had associated with the United States category.
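That explanation can be made concrete with a toy model. The sketch below is purely illustrative (the phrase weights and category names are invented, not the actual SmartNews algorithm); it shows how simple phrase-weight scoring can confidently file an obviously foreign story under a domestic category.

```python
# Hypothetical phrase weights an algorithm might have learned by
# associating phrases with categories in its training data.
WEIGHTS = {
    "us_domestic": {"mass shooting": 2.0, "semi-automatic rifle": 1.5,
                    "hate crime": 1.5},
    "new_zealand": {"christchurch": 1.0, "new zealand": 1.0},
}

def categorize(text):
    """Score each category by summing the weights of phrases found in text,
    then return the highest-scoring category along with all the scores."""
    text = text.lower()
    scores = {cat: sum(w for phrase, w in phrases.items() if phrase in text)
              for cat, phrases in WEIGHTS.items()}
    return max(scores, key=scores.get), scores

headline = ("New Zealand mosque shooting in Christchurch: "
            "mass shooting with semi-automatic rifle called a hate crime")
category, scores = categorize(headline)
print(category, scores)
```

Even though the New Zealand markers are present, the cumulative weight of phrases historically associated with the US category dominates the score, which is exactly the failure mode the editorial team observed.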

    Yes, the machine was just “doing math” but it was also telling us something about ourselves.

What we know for certain is that Bing, ChatGPT, and other language models are not sentient, and neither are they reliable sources of information. They make things up and echo the beliefs we present them with. To give them the mantle of sentience — even semi-sentience — means bestowing them with undeserved authority — over both our emotions and the facts with which we understand the world.

    Introducing the AI Mirror Test, which very smart people keep failing

    But then again, AI is now flying fighter jets.

I have an open bet that, before the decade is out, either a C-level post at a publicly listed company or a high-level post in government will be held by an AI. We seem to be getting close to that moment, with AI being offered to help make important decisions.

    This AI tool is meant to assist business owners, managers and individuals in making tough decisions. All you have to do is enter a pending decision or indecisive options and the AI tool will list pros and cons, generate a SWOT analysis, or give a causal analysis to help weigh your options. You can create a persona to provide context or backstory and get a more personalized analysis.

    ChatGPT just the start: Here are 10 AI workplace tools that can boost productivity

    [ Insert grand, unifying theory of where it’s all going here ]

The best I could think of was that we are in a short-lived “you got your chocolate in my peanut butter” moment where people are adding AI to everything they do and are enamored with the results. It’s like the “just add social” or “just add mobile” of previous tech innovation waves we’ve seen.

But, as more writers outsource their work to an AI, not to mention the flood of spammy AI-content farms that are spinning up, we’ll see a great commoditization of robotic writing: words on a page that are blobs of communication snippets, all vying for our attention.

    Then, things took a very strange turn. Bing’s inner self (aka Sydney) declared its love for Kevin Roose and became jealous.

    Still, I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward. And I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.

    A Conversation With Bing’s Chatbot Left Me Deeply Unsettled

    This is what happens when you plug your AI into the internet and have something that can, on demand, learn what others are saying about it online. It becomes paranoid and controlling.

There are theories trying to figure out what happened. Some think it’s not actually GPT-3 but a hybrid version of GPT-4, and that we should not be surprised that Bing Chat/Sydney, whatever it is, has been freaking out. It’s basically a closed system that is getting exploited by bits of unsigned code running, unsupervised, inside of it, which breaks every rule in security, so we really shouldn’t be surprised at this outcome.

    A reminder: a language model is a Turing-complete weird machine running programs written in natural language; when you do retrieval, you are not ‘plugging updated facts into your AI’, you are actually downloading random new unsigned blobs of code from the Internet (many written by adversaries) and casually executing them on your LM with full privileges. This does not end well.

    Bing Chat is blatantly, aggressively misaligned

    Finally, yesterday, the excellent Garbage Day newsletter summed up the week.

    But it was very powerful. Horrifying levels of powerful if you ask me. And when it comes to AI, it’s not just one Pandora’s Box that opens. It’s a series of nested boxes that all cannot be closed.

    AI can’t have a “woke mind virus” — it doesn’t have a mind

    There you have it. We’ve opened up a series of Pandora’s Boxes and it does not end well.

    Okie-Dokey, what’s going to happen this week?

  • On the Cusp


One silver lining in getting laid off is that you have time to meet people and learn about things that you may not have had the time or attention to pick up on while in the trenches of a full-time job. During the first week of my new no-fixed-schedule life I learned about:

    A company called Helios that uses AI to analyze the voice of the CEO on investor calls to measure the confidence of leadership. The company is offering a product described in their white paper as,

    the first widely available data product that systematically assesses the tone of the voice of an executive during earnings conference calls to produce novel and meaningful sources of quantitative information

    The Tone of Voice Provides a Novel Source of Alpha

    Freaky.

The search engine as we once knew it has become a thing of the past. Google, originally hailed for its simplicity and clean results, has lost its shine. The ten blue links are crowded out by paid placement and other forms of sponsorship. The gradual infection of the open web with cheap clickbait has now ruined even the coveted “organic links,” so that if you search for something like “Best Hotels in Osaka,” even the non-ad results are SEO-optimized sites filled with affiliate links and ads. It’s rotten all the way down. For more on this, read Doctorow’s excellent essay on enshittification.

There’s something called a “small modular reactor” (SMR), which is basically a mini nuclear reactor. There are multiple use cases for such a small form factor, including replacing old coal power plants, but it is also timely for the power-hungry data centers that have been accused of running dirty.

WebAssembly (WASM) has evolved to the point where it will not only replace older browser-based code with something faster, it will also allow sandboxed applications to run, on demand, within your browser, heralding the age of truly write-once, run-anywhere applications. If my understanding is correct, this is one of the biggest enhancements to web development since AJAX programming back in 2005.

    So much to learn, so much to do!

  • AI is only human


I’m so glad that The New York Times ran this op-ed (Artificial Intelligence’s White Guy Problem) about the inherent biases in Artificial Intelligence algorithms. Popular culture and much media coverage of AI tend to mystify how it works, neglecting to point out that any machine learning algorithm is only going to be as good as the training set that goes into its creation.

    Delip Rao, a machine learning consultant, thinks long and hard about the bias problem. He recently gave a fascinating talk at a machine learning meetup where he implored a room of machine learning engineers to be vigilant in making sure their algorithms were not encoding any hidden bias.

    The slides from his talk are posted online but Delip’s final takeaway lessons have stuck with me and are good to keep in mind whenever you read stories of algorithms taking on a mind of their own.

    Delip Rao takeaways

    It is still very early days and many embarrassing mistakes have been made and more will be made in the future. Our assumption should be that every automated system is fallible and that each mistake is an opportunity to make things better (both ourselves and the algorithm) and should not be an indictment of the technology.

  • Back when News was Physical


etaoin shrdlu is the first line of letters on a Linotype keyboard, arranged based on frequency. The phrase was used to mark the end of a column. It is also the title of a short documentary about the last run of the Linotype machines at The New York Times on July 2, 1978.

There are all sorts of wonderful details in this 30-minute film. We learn the origin of terms such as hot type and matrix and are shown how a “pig” of lead is melted down to cast type forms.

The mechanical crank and whirl of the Linotype machines are a wonderful sound, especially when contrasted with the castanet-like crackle of the new chiclet keyboards on the new mainframe terminals shown later in the film. As the 9pm first-edition deadline approaches, the “make-up men” hunch over their tables side by side with page editors, physically laying out the paper on full-page forms. There’s a wonderful exchange as they figure out how to make the page work, a construction project of words.

    Farewell – ETAOIN SHRDLU – 1978 from Linotype: The Film

    These days are long gone now but I’m glad for this film which captures a technology that was a wonder of its day.

    h/t Open Culture

  • The Modern Luddite’s Prayer

Sherry Turkle writes in this week’s Sunday NY Times (The Flight from Conversation) that, in the pursuit of connections via technology (email, texting, social media), we are forgetting the slow rhythm and cadence of face-to-face conversations.

    FACE-TO-FACE conversation unfolds slowly. It teaches patience. When we communicate on our digital devices, we learn different habits. As we ramp up the volume and velocity of online connections, we start to expect faster answers. To get these, we ask one another simpler questions; we dumb down our communications, even on the most important matters. It is as though we have all put ourselves on cable news. Shakespeare might have said, “We are consum’d with that which we were nourish’d by.”

Reading this, I was reminded of an essay I whipped off in Paris last summer. I was there coming off a two-week holiday. I rented a flat for my family in the Marais district and we spent each delicious day walking the city and drinking in its vibrancy. One evening, I think amped up on too much espresso and channeling Kerouac, I scribbled the words below, by hand, all in one go. I never even went back to it. But Ms. Turkle’s piece made me think of it again.

I never posted it because I’m a little embarrassed about it but, hey, it’s a blog, so what the heck, indulge me.

    The Modern Luddite’s Prayer

    The spindled algorithms of our time are optimizing the sinews of humanity. Gnashing life’s great works in the gears of its Engine. These are the Satanic Mills of our generation.

    Spitting out matchsticks of knowledge that are mere sulfur-tipped flashes of attention-seeking knowledge, no longer able to light the pyre of change in our mossy, over-grown minds which have been deadened by years of trackpad-enabled twiddling.

    We are addicted to the “new” in our Newsfeed but have lost the wisdom of perspective

    Supplicants to the superior recall of the internet brain, we slavishly log time on the social media treadmill with a thirst to be first. Dark Times ahead if we continue to blindly submit to the false gods of Real-time and PageRank.

    Step away from your monitor, stop stroking your little glass-faced friend. Look into your neighbor’s eyes and wonder at their soul. Smile to the passing stranger on the street and note them for who they are. Feel the warmth and smell of humanity. Marvel at life’s infinite choices.

    Live to create, not consume.