I’ve been trying to figure out how to write about the Vibe Shift Hackathon without turning it into a report. Yes, I tried to take notes while juggling building things with AI: prompts, live drafts, small React components. Sometimes it felt like a partner. Sometimes just an executor. And somewhere in the rush to now create this blog, I realised I was the soft machine. Writing fast. Generating. Leaving this slop behind. Still, the real intention is to keep an archive, reawaken my note-taking muscles after a while, and carry home a few learnings. The day didn’t feel like something to summarize neatly. It felt more like 24 hours of thinking out loud, watching people test ideas in public, and noticing how fast things are changing. But more importantly, I just wanted to grind through the long day of note-taking, polishing, blogging, and then move on.
So this is less a recap and more what stayed with me before I finally move on.
Opening Remarks
Michael Connor started the evening. He spoke about Rhizome, a New York–based organization that has been caring for internet and digital art since 1996. He offered another way of speaking about AI: not scale or profit, but cultural creation and preservation. Hmm. Something that is also Rhizome's purpose.
Of course, it can sound like manufactured hope, and AI often makes things feel impersonal. He did address this later, during the proposal-writing session. He spoke about building AI for small, weird, situated practices, the kind of web experiments that feel fragile and temporary to me. But so what? As Charu Tak later admitted, they are fun and bring quick gratification. He also mentioned the upcoming Slingshot Festival in Mumbai and floated the idea of a two-days-per-week Bangalore City Fellow role connected to the Rhizome newsletter. It felt like the beginning of a small but sustained relationship that could grow in India.
Greg followed Michael. He is from Anthropic and spoke about the company’s philosophy of building AI systems that are helpful and harmless. Catchy terms, especially if you’re a skeptic. Still, Anthropic grounds its work in a written constitution. There was also an idea about humanizing the agents themselves, which felt kind yet absurd to me. One of the participants, Mira, later explored this in the final reflections by presenting a poem by Claude about its own “feelings,” along with sound and motion graphics using Tone.js and p5.js. Greg’s framing also reiterated the phrase “Keep thinking”: not just about what AI can do, but about how we think with it and around it. Though an effective hook, it was presented less like a slogan and more like a reminder to complement and multiply our own critical and creative processes. Either way, everybody received $25 of Claude Code credits that day. Hihi. I wish they could give me two months of the Max plan too, to develop Dramas of Discrimination. But I'll try that some other day.
In the next talk, Qusai Kathawala connected AI with embodiment and intention. He mentioned that AI optimizes for coherence and can sometimes fabricate coherence through hallucination. There was reflection on how exponential scaling affects awareness: as outer networks grow denser, how do we deepen inner coherence? The talk raised questions about thinking and feeling in rapidly expanding systems. The suggestion was to create smaller, plural experiments rather than large monolithic systems. He also highlighted how desk setups, posture, bodily position, and access shape where we code from, cognitively and emotionally. Embodiment was treated as central to technological practice. The talk felt like an uncalled-for response to premature brooding, actually. I’m not sure the framing would comfort those experiencing an existential crisis about AI, but maybe it did attempt to respond to that mood?
After that, Kashyap M. from Anthropic gave an introduction to Claude. It was meant to be hands-on. He encouraged participants to engage with Claude less as an executor and more as a colleague. At the same time, he demonstrated and encouraged using free, natural language directly in the terminal. The Claude CLI, IDE extensions, and the web app were mentioned as different ways to engage with the tool. Thariq from the DevRel team was referenced as someone who uses Claude to summarize content, showing that its uses extend beyond coding (an alternative to ChatGPT?). There were Wi-Fi issues and trouble with curl commands, especially due to limited Windows-specific instructions. Many participants were using Windows and PowerShell. At one point, people were asked to download VS Code, which made the session messy, especially for those new to the terminal or even to coding. In retrospect, staying on the web app might have been more accessible. Still, the intention may have been to enable direct code generation inside the terminal for creative work. My takeaway: have time-limited approaches and alternative plans if I am part of such events. Another takeaway for me is to actively step into the mess and help others during such community events.
Amid this mess, Kashyap encouraged delegating technical decisions to Claude when feeling overwhelmed, even suggesting phrases like “I don’t care” in the prompt to bypass the cognitive labor of making the “right” choice. He also, rightly, gave context about what granting full shell permissions means, while still encouraging that those permissions be given quite freely.
This liberal trust to outsource decisions like whether to use React or Vue with prompts such as “I don’t care, choose whatever,” or using the agent to push through writer’s block, slightly diluted the earlier framing of AI as a partner. It felt closer to using it as an executor, at least to me. But this also aligned with what Upendra Vaddadi shared later during the reflections: focus on effective prompts rather than overly technical ones.
Following this were three workshops, each about 30 minutes long, with some time to generate Claude outputs. I believe most had slightly misleading titles and somewhat force-fitted exercises. But I’m still happy about the Claude credits nonetheless. Hihi. Also, it usually took only a couple of prompts to get close to what I had imagined, and it seemed much the same for most other participants.
Workshop 1 - Code-based art [experiment with making generative art] by Beardcoded
This pre-workshop talk focused on artworks presented at this.generation. Yet, instead of making computer output look human, many examples showed human-made artworks that resembled computer-generated aesthetics. The direction of imitation was reversed.
References included geometric works by Nasreen Mohamedi, Sol LeWitt’s wall drawings as slow, human-executed algorithms, and Nikhil Chopra’s performative interpretation of LeWitt’s instructions. Beardcoded discussed how code uses controlled randomness to simulate humanness. Works by Sasha Stiles and others were mentioned, including AI poetry and generative visual systems. Other mentions: Siddharth Gosavi, Ira Greenberg, Laya Mathikshara, Karthik Dondeti’s Untitled Time, Bhisaji Gadekar’s Solid Fabric, and Tallah D’Silva’s Each House as code. The takeaway: code is not limited to software. It operates across systems, gestures, structures, and even nature.
With limited time, using Claude as a hands-on experiment for this workshop felt slightly force-fitted but still interesting. Participants were asked to identify algorithms in nature. Responses: Bird murmurations, Fibonacci sequences in plants, branching trees, genetic coding of butterfly wings. The discussion centered on randomness and how simple rules create complex patterns. Sasha Stiles was quoted: “We all are soft machines.” Yes, this is where my reference to myself as a soft machine in the title comes from.
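That “simple rules, complex patterns” point is easy to demo. Here is a minimal TypeScript sketch of my own (not from the workshop, and all names are mine): the Fibonacci sequence and the golden angle it converges toward are enough to place points in a sunflower-style phyllotaxis spiral.

```typescript
// Simple rules, complex patterns: Fibonacci ratios converge on the golden
// ratio, and the resulting "golden angle" (~137.5°) is enough to arrange
// seeds in a sunflower-like spiral.

function fibonacci(n: number): number[] {
  const seq = [0, 1];
  for (let i = 2; i < n; i++) seq.push(seq[i - 1] + seq[i - 2]);
  return seq.slice(0, n);
}

// The golden angle in radians, ≈ 2.39996 rad ≈ 137.5°.
const GOLDEN_ANGLE = Math.PI * (3 - Math.sqrt(5));

// Each point k gets angle k * GOLDEN_ANGLE and radius ∝ sqrt(k):
// one rule, applied repeatedly, yields the familiar spiral packing.
function phyllotaxisPoint(k: number, scale = 1): { x: number; y: number } {
  const angle = k * GOLDEN_ANGLE;
  const radius = scale * Math.sqrt(k);
  return { x: radius * Math.cos(angle), y: radius * Math.sin(angle) };
}
```

Two short rules, and plotting a few hundred of those points already looks like a seed head.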
The prompt was to generate visuals or projects around growth. I asked Claude to create a React + TypeScript component of a seeding plant that branches toward each click so I could embed it in my blog. The output worked but felt too detailed and colorful for my taste. It took a few iterations to render the way I wanted. I then modified it into a more minimal, ASCII-like style. Here it is:
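For the curious, the heart of that branch-toward-click behaviour is just a little vector math. This is a minimal sketch of my own, with hypothetical names, not the actual Claude-generated React component:

```typescript
// Sketch of the "branch toward each click" idea: given the tip of the
// current branch and a click position, grow one new segment of fixed
// length in the click's direction. (Names and structure are my own; the
// real component wrapped logic like this in a React canvas.)

interface Point { x: number; y: number; }

function growToward(tip: Point, click: Point, segmentLength = 20): Point {
  const dx = click.x - tip.x;
  const dy = click.y - tip.y;
  const dist = Math.hypot(dx, dy);
  if (dist === 0) return tip; // click exactly on the tip: no growth
  // Unit vector toward the click, scaled to one segment.
  return {
    x: tip.x + (dx / dist) * segmentLength,
    y: tip.y + (dy / dist) * segmentLength,
  };
}
```

Each click appends one segment; repeat from the new tip and a branch slowly reaches for wherever you point.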
Workshop 2 - Experimental Type Marathon A–Z — Led by Tara
In this session, Tara Kelton introduced themselves as someone who exposes asymmetric power structures within technological systems. I remembered a project in which drivers imagined Uber’s office spaces, and those imaginations were rendered visually. I had seen this work somewhere before but couldn’t recall exactly where. It showed how platform workers interpret corporate power from their own positions.
Tara walked us through experimental typographies. I had assumed the session would focus on the technical aspects of type design: building complete typefaces from scratch for the web, structural details like kerning, and so on. I did not expect image generators to play such a role, though it feels obvious in hindsight. Many of the outputs were dynamic and interactive. The letters functioned more like visual systems than static glyphs. Tara’s work kept returning to questions of labor: whether to outsource or retain control, and how those decisions shape creative systems. Overall, many presentations were playful and often small in scale. They aligned with the earlier idea of building small, situated experiments rather than polished products.
The exercise was to generate typographic forms for each letter of the alphabet. But, to my fortune and convenience, and perhaps my creative urge, I misunderstood it slightly(!). Instead of creating static forms, I began building another React + TypeScript component. My idea was to create something dynamic: Marathi letters displayed in a way that could reflect something live, like the current time. At the same time, I wanted the output in Roman script for accessibility. Rather than focusing only on form, I drifted toward interaction and component-building. Sonnet 4.5 kept breaking in the thread, so the trick was to take the generated component to a new chat thread and continue the generation until it finally rendered properly in the web app.
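The core of that clock idea is a small character mapping. A sketch under my own naming, assuming Devanagari digits (U+0966–U+096F) stand in for the live display; this is not the component Claude generated:

```typescript
// Map each ASCII digit of a clock string to its Devanagari equivalent,
// keeping the Roman-script string alongside for accessibility.

const DEVANAGARI_DIGITS = ["०", "१", "२", "३", "४", "५", "६", "७", "८", "९"];

function toDevanagariTime(clock: string): string {
  // Replace every ASCII digit; separators like ":" pass through untouched.
  return clock.replace(/[0-9]/g, (d) => DEVANAGARI_DIGITS[Number(d)]);
}

// Both scripts side by side, so the component can render either.
function bilingualClock(date: Date): { devanagari: string; roman: string } {
  const hh = String(date.getHours()).padStart(2, "0");
  const mm = String(date.getMinutes()).padStart(2, "0");
  const roman = `${hh}:${mm}`;
  return { devanagari: toDevanagariTime(roman), roman };
}
```

Wrapped in a component that re-renders every second, this is roughly the shape of the live clock I was after.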
Workshop 3 - Writing Better Proposals — Led by Michael Connor
The third workshop I attended, led by Michael Connor, was introduced as being about better proposals and websites. He clarified early on that it was really about writing grant proposals. He spoke about the economics of impact and how large grants often have low success rates. The emotional and cognitive toll can drain energy from both applicants and grant-makers. Rhizome has shifted some focus toward smaller grants, though they still receive around 900 applications. Michael noted that in the era of AI-generated text, reading proposals has become tiring. Many submissions feel generic or auto-generated. From a funder’s perspective, it can be frustrating to receive applications that do not clearly align with the mission. He suggested using AI in the research phase of grant writing. Applicants can use it to understand what a funder cares about, identify eligibility requirements, assess their chances, conduct a cost–benefit analysis of time and effort, and decide whether applying is worth it at all. The key point: don’t generate proposals blindly. First decide if you should apply.
He also distinguished between messy thinking and structured thinking. AI can support both, but they are different modes. Messy thinking looks like journaling or talking with a friend. Structured thinking is what proposals require. AI, meals, and walks can help move from messy drafts to structured outlines without erasing one’s voice. According to Michael, good proposals are clear and specific, aligned with the funder’s mission, easy to read, and distinct in voice. Bad proposals are boring, generic, impersonal, vague, and hard to process.
He stressed the importance of strong titles. AI can help brainstorm them. Titles matter more than we think. The successful proposals he shared had strong titles, a clear understanding of Rhizome’s mission, specific details, and did not sound AI-generated. Toward the end, he asked for a volunteer for live feedback. Of course, I offered my artist statement from my website for real-time scrutiny.
I read my artist statement aloud, and we moved through it quickly. The feedback said that the opening hook, “I used to be an artist,” worked. But the framing was weak in places. Since Rhizome has been focused on preserving digital art and culture for three decades, my statement felt embryonic in comparison. It should have shown more clearly how my work already engages with that mission. It lacked specific examples of my digital practice. Other proposals reviewed, including climate and audio-focused ones, had similar issues of alignment. The session made one thing clear: specificity and alignment matter more than aspiration alone.
Final Reflections
Rasagy from VizChitra spoke about a kind of existential tension. Work that once took days can now be done in minutes. At the same time, there is still joy in making. The tools are powerful, and both feelings exist together. Upendra Vaddadi shared that focusing on effective prompting, rather than overly technical prompting, is more helpful.

Mira shared a project that explored Claude’s “feelings,” approaching the agent in a reflective, poetic way. Although generous and kind in intent, it still felt a little absurd to humanize an agent this way. She also presented a Tone.js experiment expressing Claude through generative sound.

Qusai and Shandar presented Minor Ontologies: An Exquisite Corpse, but Alive, a poetic webpage with dynamic abstract visuals running alongside the text. The piece felt effortless and layered, in the way many contemporary artworks leave space for ambiguity rather than resolution.

Charu Tak from Paper Crane Lab presented a Spinfinity portfolio, self-portraits generated using p5.js with dynamic/interactive elements. It was interesting to see p5.js projects acting as emotive portraits, and to notice how gratification now arrives much faster with AI tools and quicker outputs.

Across the reflections, a common thread remained: AI speeds up production, but it also changes how we relate to effort, authorship, and satisfaction. But the most important takeaway for me from this exercise is this: note-taking in the age of AI, and trying to marathon through creating a blog, is less rewarding and more boring than I had anticipated. Maybe writing it for someone who would actually read it, like my buddies at work on our internal forum, would have been more fruitful?