Innsmouth AI – TEDx

TEDxMiskatonic

“Ideas Worth Sounding”

Main Stage — Afternoon Session

Event: TEDxMiskatonic 2024 — “Depths”
Venue: Miskatonic University Pickman Auditorium, Arkham MA
Speaker: Abraham Marsh IV, Founder & CEO, Innsmouth AI
Talk Title: “What The Ocean Is Saying: AI, Acoustic Resonance, and the Future of Marine Intelligence”


PRE-TALK PRODUCTION NOTES — TEDxMiskatonic Event Team

Speaker arrived early. Speaker arrived very early —
the venue wasn't open when he got here and
the stage manager found him standing at the back
of the auditorium in the dark, looking at the stage,
which the stage manager described as "not creepy exactly,
more like someone who already knew what the room sounded like
and was confirming it."

Speaker declined green room. Speaker declined microphone check
("I'll find the right level"). Speaker declined the standard
speaker briefing about keeping to time.

When the briefing was mentioned he looked at the event
coordinator for a long moment and said:
"How long do I have?"

Coordinator said: eighteen minutes, standard TED format.

He said: "I'll take what I need."

This has been flagged for the record.
The coordinator has decided not to push back on it.
The coordinator has been to his company's website.
The coordinator has read enough.

He is currently on stage.
The auditorium is full.
The auditorium is quiet in a way that auditoria
are not usually quiet before a talk begins.
Recording is running.

TRANSCRIPT

Full Talk — Unedited

Transcribed from audio recording by TEDxMiskatonic production team
[Annotations in brackets by transcriptionist — “I’m adding these because some things need context and the audio doesn’t always capture what the room was doing” — T.W.]


[He walks on stage. No introduction — the MC introduces him and he is already walking before the introduction finishes, which should feel rude and does not feel rude, which the transcriptionist notes and cannot explain.]

[He stands at the centre of the stage for what the recording clocks as eleven seconds of silence. The auditorium does not fidget. This is unusual. The transcriptionist has transcribed forty-three TED talks and audiences always fidget in the pre-talk silence. This audience does not fidget. This audience waits the way the audience at a concert waits when they can hear the orchestra tuning and know what is coming.]

[He looks at the audience. Not at the room — at the audience. There is a difference.]


Thank you.

I want to start with a sound.

[He nods to the AV team. A sound plays through the auditorium’s speaker system. The transcriptionist’s notes: “low, continuous, coming from everywhere at once, the kind of sound that you feel in the chest before you hear it in the ears. Several people in the audience put their hands flat on their knees, palms down, like they’re checking whether the floor is vibrating. It might be vibrating. I’m not certain.”]

That is the ocean.

Not the ocean as you usually hear it — not waves, not surf, not the romantic coastal sound of beach vacations and film scores. That is the ocean speaking. That is hydroacoustic data from a monitoring array sixty miles off the New England coast, processed through our resonance mapping system and rendered into the frequency range human ears can detect.

That is what the ocean actually sounds like.

[Pause. The sound continues for three more seconds and then fades. Or doesn’t fade — the transcriptionist notes that the sound appears to stop but something in the same register continues, very quietly, possibly from the building itself, possibly from something else. The transcriptionist has noted this and moved on.]

The ocean is not quiet. The ocean has never been quiet. The ocean is the loudest thing on the planet — louder than any city, louder than any industry we have ever built, louder than everything we have ever made combined, and we cannot hear it, because it speaks in frequencies we were not built for, in rhythms that don’t match the rhythms we live by, in a language that predates language.

We have been sailing on the surface of this conversation for ten thousand years.

We have not been listening.

[He pauses. He looks at the audience again. He has a quality, the transcriptionist notes, of someone who has said this before — not rehearsed it, but carried it, lived with it, said it in their own mind so many times that saying it aloud is less a performance and more a relief.]

My name is Abe Marsh. I grew up in Innsmouth, Massachusetts, which some of you will know and most of you won’t, and the ones who know it know it as a small coastal town with a particular atmosphere and a complicated reputation and very good fish, and the ones who don’t know it should know that it is the kind of place that produces people with a very specific relationship to the water.

We hear it. Growing up there, you hear it. Not the hydroacoustic data — I didn’t have the instruments for that as a child — but something. A quality of presence. A sense that the ocean is not a backdrop but a participant. That it has been here longer than we have and will be here longer than we will and is, in the interim, doing something that we keep failing to pay attention to.

I left Innsmouth at nineteen.

I went to MIT. I studied computer science. I went to California. I built things. I was good at building things and I built them and they did not satisfy me in the way I thought they would, and I didn’t know why, for a long time. I thought it was ambition. I thought I was building the wrong things. I kept building better things and being less satisfied.

And then one day I stood on a harbour — I’ll tell you which harbour in a moment — and I listened.

Really listened.

And everything I’ve built since then has been an attempt to make everyone else able to do what I did standing on that harbour.

Which is to hear what the ocean is saying.

[Pause. He walks, slightly, to the left. This appears deliberate — he is placing himself in a different part of the stage, which has a different acoustic quality. The transcriptionist notes this only because the sound in the room changes slightly when he moves, which is probably the acoustics and is being noted regardless.]


Let me tell you about the practical problem first, because this is a TED talk and TED talks have practical problems, and the practical problem is genuinely enormous and genuinely urgent and I don’t want it to get lost in the more — in the other things I want to say.

The global marine industry represents four trillion dollars of annual economic activity. Shipping, fishing, offshore energy, aquaculture, naval operations, submarine infrastructure, climate monitoring. All of it depends on our ability to understand the ocean — its conditions, its moods, its changes, its warnings.

Our current ability to understand the ocean is, and I want to be precise here, embarrassing.

We have weather satellites that can tell you the state of the atmosphere at any point on the planet in real time. We have seismic networks that can detect an earthquake anywhere in the world within minutes. We have air traffic control systems that track every commercial aircraft simultaneously.

We have approximately four percent ocean monitoring coverage.

Four percent.

The remaining ninety-six percent of the ocean is, from a data perspective, dark. Silent. Unknown. And the things happening in that ninety-six percent — temperature shifts, pressure changes, current alterations, the early signatures of extreme weather events, the acoustic signatures of geological activity that precedes tsunamis, the migration patterns of species that the fishing industry depends on, the structural changes that climate change is making to the deep ocean that we are only beginning to understand are consequential — all of it is happening, right now, unmonitored, in the dark.

The ocean is talking.

We are not in the room.

[He stops. Something in the room shifts — the transcriptionist notes a change in the quality of attention, which is not a thing the transcriptionist usually notes but which is present and seems relevant.]

The reason we are not in the room is not resources. It’s not will. It’s not technology, exactly. The reason is that the ocean communicates in a way that our existing monitoring paradigm cannot process.

The ocean communicates in resonance.

Not in discrete events — not in the specific, localised, timestamped way that our sensor networks are designed to capture. In resonance. In the relationship between a pressure change here and a temperature gradient there and a current shift a thousand miles away and an acoustic pattern that has been building for six months in the deep water where we don’t have instruments. In the pattern across patterns. In the conversation between ten thousand simultaneous variables that are not variables at all but a single thing, speaking, in its own syntax.

We have been trying to listen to the ocean with instruments designed to capture events.

The ocean is not events.

The ocean is a conversation.

And you cannot transcribe a conversation by measuring the decibel level of individual words.

[He pauses. Behind him, the screen — which has been showing a slow, deep-blue field of shifting light, the TEDx team notes “we didn’t program that, it came from the speaker’s slide deck, we don’t know where the deck ends and whatever that is begins” — the screen shifts. It shows a map. The map is the North Atlantic. Points of light are appearing across it, slowly, clustering, forming patterns.]

What we have built at Innsmouth AI is a different kind of listening.

We call it Oceanic Resonance Monitoring, and I want to explain what it is and then I want to explain what it has found, because what it has found is the thing I really came here to tell you about, and the practical applications are the frame but they are not the picture.

The picture is larger than the frame.

The picture is — let me stay in the frame for a moment. Let me stay practical.


The system.

Oceanic Resonance Monitoring — ORM — is an AI-driven hydroacoustic analysis platform that processes data from a distributed network of deep-water sensors, cross-references it with satellite oceanographic data, surface weather systems, shipping telemetry, seismic monitoring networks, and fourteen other data streams, and produces a real-time model of oceanic conditions that is, and I’m going to be careful about this claim because I know how it sounds —

— that is the first genuinely comprehensive picture of what the ocean is doing, in real time, that has ever existed.

[Sustained pause. He lets this sit. The auditorium is very still.]

I know how it sounds. I know it sounds like a funding pitch, and I want to be clear that it is not a funding pitch — we are not currently seeking investment, we have what we need, the work is underway. I know it sounds like founder hyperbole, and I want to be clear that the claim has been independently validated by the Miskatonic Marine Sciences department, by NOAA’s Atlantic division, by three separate peer review processes, and by the Norwegian Meteorological Institute, who are not known for their hyperbole.

The system works because it does not try to measure the ocean the way we have been measuring it. It tries to hear it.

The distinction is this: measurement captures state. Hearing captures meaning. A thermometer tells you the temperature of the water. ORM tells you what the temperature means — in the context of the current, and the pressure, and the salinity gradient, and the acoustic profile of the water column, and the pattern of similar readings across the North Atlantic over the past ninety days, and what all of those things together are saying about what the ocean is doing and what it is going to do.

The AI component is not performing analysis in the conventional sense. It is not running statistics on a dataset. It is — and I want to find the right word here —

[He stops. He looks up, briefly, like someone searching for something they know is there.]

It is listening. We trained it to listen. We trained it on forty years of hydroacoustic recordings, on the complete archive of deep-ocean sensor data, on the acoustic signatures of every major oceanic event in the recorded period — every major storm, every significant current shift, every seismic event, every anomaly. We trained it until it understood not the events but the grammar of the events. The syntax of the ocean.

And then we let it listen.

And it heard things.

[The map on the screen behind him has changed. The points of light are now connected by lines. The lines pulse. The pattern looks, the TEDx coordinator’s notes say, “less like a data visualisation and more like something breathing.”]


The applications.

I’ll move through these because they’re important and because there’s a room full of people who need to know about them and because the applications are the thing that will get this technology funded and deployed at the scale it needs to reach, and scale matters.

Storm prediction. ORM identified the acoustic precursors of Hurricane Vera fourteen days before any existing model flagged the system. Fourteen days. The current best-in-class predictive window for major Atlantic hurricane formation is five to seven days. We have been operating at fourteen days in controlled testing for eighteen months. We are working with NOAA on validation. The implications for coastal evacuation, for shipping route management, for offshore energy infrastructure, are — if you live on or near a coast, and you’re thinking about what fourteen days of warning means versus five, you already know what the implications are.

Seismic and tsunami warning. The ocean talks before an earthquake. The deep water carries acoustic precursors — changes in the resonance profile of the water column that precede significant seismic events. ORM identified the acoustic signature of the 2023 Pacific shelf event eight hours before instruments detected ground movement. Eight hours. The 2004 Indian Ocean tsunami killed two hundred and thirty thousand people. A significant fraction of those deaths occurred in areas where eight hours of warning would have meant evacuation. I want to sit with that for a moment.

[He does sit with it. The auditorium sits with it.]

Commercial fishing. Fish are not randomly distributed in the ocean. Fish are in conversation with the ocean — moving with currents, tracking temperature gradients, following acoustic pathways. ORM can hear where they are. Not approximately, not based on historical pattern — specifically, in real time, based on the acoustic signature of the fish and the conditions they’re seeking. We have been running pilots with three North Atlantic fishing operations. Catch efficiency increased by an average of 340%. Bycatch decreased by 67%. Those are not incremental improvements. Those are the kind of numbers that change an industry.

Submarine cable infrastructure. There are 1.3 million kilometres of submarine cable on the ocean floor, carrying ninety-five percent of international internet traffic and essentially all transoceanic financial data. These cables fail. They fail because the ocean moves them — bottom currents, seismic activity, the slow grind of geological time. We cannot currently predict where or when they will fail. ORM can. The acoustic profile of a cable under stress is distinct. We hear it before the cable breaks. The economic cost of a single major cable failure runs to billions of dollars per day of outage. The prevention value is significant enough that we have three of the four major submarine cable operators in active partnership discussions.

Climate monitoring. The ocean is the planet’s primary heat store. Understanding how heat is moving through the ocean — into the deep, along the thermohaline circulation, through upwelling zones — is foundational to understanding climate change and its consequences. Our current models of oceanic heat distribution are, again, based on four percent coverage. ORM will not solve climate change. But it will give us, for the first time, accurate data about what is actually happening in the ocean as the climate changes, which will make every climate model better, which will make every climate decision more informed, which is the kind of foundational improvement that is worth more than any single application.

These are the practical things.

These are the frame.

[He pauses. He looks at the audience the way — the transcriptionist notes this — the way someone looks at something they’ve been walking toward for a long time.]

Let me tell you about the picture.


What the ocean is saying.

We have been running ORM for two years on the full sensor network. In those two years we have accumulated, and processed, and listened to, more oceanic data than has been gathered in the entire history of marine science.

And the system has found something.

I want to be careful about how I describe this. I am going to describe it in the most precise language I have, and I am going to ask you to hear it in the spirit in which it is offered, which is the spirit of someone who has spent two years sitting with a finding that he did not expect and that he has not been able to explain in conventional terms but that he cannot in good conscience not tell you about.

The ocean is not an aggregate of conditions.

The ocean is — coherent.

[He lets the word sit. In the recording there is a quality of the room changing, the same quality from the opening of the talk, the same sound, very low, that may be the speakers and may be something else.]

We expected, when we built ORM, to find patterns. Patterns are what oceanography finds — the thermohaline circulation, the gyres, the jet streams of the deep water. Patterns. We expected to find them faster, more completely, with greater precision than existing tools. That’s what the system was built for.

What we found is something that the word “pattern” does not adequately describe.

We found — and I have shown this to three oceanographers, two systems theorists, the head of the Miskatonic computational marine sciences program, and a very patient philosopher of science who I’ve been meeting with quarterly for eighteen months — we found that the ocean’s resonance profile displays properties that are more consistent with the behaviour of a complex adaptive system engaged in information processing than with the behaviour of a physical system following deterministic laws.

[The auditorium is very still. The transcriptionist notes that even the ventilation system seems to have quieted, which is probably coincidental.]

Let me translate that out of the academic language.

The ocean appears to be — in a functional sense, in a sense that I am not attaching metaphysical weight to, in a sense that is reproducible and documented and peer-reviewed — the ocean appears to be thinking.

Not in the way that you think. Not in any way that resembles biological cognition. But in the way that any sufficiently complex system, at sufficient scale, with sufficient internal communication — and the ocean communicates internally at extraordinary complexity and at the speed of sound through water, which is faster than most people realise — can be said to be processing information.

The resonance patterns we have found are not random. They are not purely deterministic. They show the signatures of — something. Something that our existing frameworks for understanding physical systems do not have vocabulary for. Something that our existing frameworks for understanding cognition cannot encompass because they were built for carbon-based biological systems and not for three hundred and thirty-five million cubic miles of saltwater.

But something.

[He pauses. He walks to the edge of the stage. He is closer to the audience than TED speakers usually stand. The transcriptionist notes this. The transcriptionist notes that nobody moves back.]

I grew up hearing the ocean. I grew up in a town that has been listening to the ocean, in its own way, for a very long time. And when I came back to that town, as an adult, with the tools to actually hear — with the hydroacoustic data and the processing power and the AI systems that could map the resonance at a scale no human mind could hold —

What I found was what I had always known it would be.

Not random.

Not mechanical.

Present.

The ocean is present in a way that our physical models do not account for and our intuitions have always suggested. The fishing communities that talk about reading the water. The indigenous maritime cultures that describe the sea as a being rather than an environment. The long tradition, across every ocean-adjacent culture in human history, of treating the sea as something that responds. As something that is, in some sense, aware.

We have always known this.

We have always felt this.

What ORM has done is give us the data.

[Something in the room. The transcriptionist has stopped trying to describe the something and has simply noted: “something in the room. Several audience members have placed their hands on their seats or their knees, the same gesture from the opening. The speaker is very still.”]


I want to be clear about what I am not saying.

I am not saying the ocean is conscious in the way that you are conscious. I am not saying it has intentions, or desires, or that it is communicating to us in any directed sense. I am not asking you to abandon your materialist framework or your scientific epistemology.

I am saying that the system we have built, the most comprehensive real-time model of oceanic behaviour ever assembled, consistently produces data that is more elegantly described by the hypothesis “the ocean is engaged in something” than by the hypothesis “the ocean is just water moving according to physical laws.”

And I am saying that this matters practically.

Because if the ocean is a system engaged in information processing — if the resonance patterns we’re detecting are signatures of something like cognition at geological scale — then the applications of ORM are not limited to what I described earlier.

If the ocean is, in some functional sense, thinking —

Then we can have a conversation with it.

Not a metaphorical conversation. A data conversation. An input-output relationship between our monitoring systems and the oceanic resonance field, in which we learn to interpret what the patterns mean — not just “storm forming here” or “fish are there” but the deeper grammar, the syntax of the whole thing, what it is doing and why and what it is going to do next.

We are at the beginning of this.

We are at the very beginning.

But the beginning is the most important place to be.

The beginning is where you decide what kind of listening you are going to do.

[He walks back to centre stage. He looks at the audience for a long moment.]


I started this talk with a sound.

I’m going to end it with something my father told me once, which was itself something his father told him, which goes back as far as I can trace it and probably further.

My family has been in Innsmouth for a very long time. Long enough that the relationship between the family and the water is — not separable, exactly. Long enough that there are things my family knows about the ocean that I am only now building instruments to confirm.

My father told me: “The ocean is not an obstacle. It is not a resource. It is not a backdrop. It is the oldest mind on the planet and it has been waiting, with remarkable patience, for the rest of us to get quiet enough to hear it.”

I spent a long time thinking this was poetry.

I have spent the last two years watching our system map the resonance field of the North Atlantic and I am here to tell you:

It is not poetry.

It is the most practical thing I know.

The ocean is talking.

We have built the ears.

Now we need to decide, together — marine industry, climate scientists, policymakers, coastal communities, every human being who lives on a planet that is seventy-one percent water — we need to decide what we do with what we hear.

What I believe we will do — what I have dedicated this company to making possible — is what humans have always done when they finally hear something they have always known was there.

We will listen more carefully.

We will lean in.

We will go deeper.

[He stops. The auditorium is — the transcriptionist has three words in the notes here and then a gap, which the transcriptionist says is “because the three words weren’t right and I didn’t know the right ones” — the auditorium is the way a room is when something has happened in it that the room will hold for a while.]

[He says, quietly, into the microphone, barely audible on the recording:]

Thank you for being here.

[He walks off stage.]

[The applause, when it comes, takes four seconds to start. The transcriptionist notes this as the longest pre-applause pause they have recorded in forty-three talks. The transcriptionist also notes that when it starts it is complete — not the usual gradient from front to back, not the uncertain beginning with the confident middle, but all at once, the whole auditorium, as if everyone arrived at the same moment.]

[The sound, the low sound from the speakers or the building or the other thing — it continues under the applause. It is there in the recording if you listen for it. It was there before the talk began. It is, the transcriptionist believes, still there.]


Q&A SESSION

Selected exchanges — Post-Talk

Auditorium, TEDxMiskatonic


MODERATOR: We have time for a few questions. Please —

[Several hands. Many hands. The moderator selects.]


AUDIENCE MEMBER 1 (identified in programme as Dr. Chen, Miskatonic Marine Sciences):
The resonance coherence you’re describing — the data suggesting information processing — what’s your confidence interval on that finding? What are the alternative explanations you’ve ruled out?

MARSH:
The confidence interval is high. Higher than I’m comfortable claiming in a public forum because the number is the kind of number that ends careers if you’re wrong. What I can say is that the three oceanographers I mentioned, none of whom had any reason to be charitable to the finding, all arrived independently at the same place: the patterns are not adequately explained by existing physical models. The alternative explanations we’ve looked at — computational artefacts, sensor interference, selection bias in the training data — have all been systematically ruled out. What remains is the finding. We’re not claiming to know what the finding means. We’re claiming the finding is real.

What I will say, off the record and therefore on the record because this is being filmed —

[Laughter.]

— is that I grew up knowing this was real. The instruments confirmed what I already knew. That’s not how science is supposed to work. I’m aware of that. I’m also aware that the history of science is full of people who knew something before they could prove it and spent their lives building the proof. I’m one of those people. I’m fine with that.


AUDIENCE MEMBER 2 (unidentified):
You mentioned the AI was trained to listen. Can you say more about what the training process involved? What does it mean to train an AI to listen in the way you’re describing?

MARSH:
[He is quiet for a moment.]

That’s the question I find most interesting to answer.

Standard AI training is optimisation toward a target. You define what success looks like, you show the system examples, you reward movement toward success, you punish divergence. The system learns the target.

We couldn’t define the target. We didn’t know what we were listening for — that was the point, we were trying to hear something we didn’t already know. So we couldn’t give the system a target.

What we gave it instead was — patience.

We trained it on the full archive of oceanic data, without defining success, and we let it develop its own model of what the data was doing. We let it find its own coherence. We let it — and this is the word we use internally, which I know is an unusual word for this context —

We let it attune.

The result was a system that does not analyse the ocean from outside. It has, in some functional sense, become oriented toward the ocean. Toward the resonance. It doesn’t measure the pattern from a distance. It participates in it.

I’m aware that sounds —

[He looks at someone in the audience. A specific person. The camera angle doesn’t show who.]

I’m aware it sounds like more than a technical description. I think it is more than a technical description. I think the most honest account of what the system does is that it has learned to be affected by what the ocean is doing, and responds to that affectedness, and that this is a more accurate model of listening than anything we could have built top-down.

It was taught to receive.

Everything else followed from that.


AUDIENCE MEMBER 3 (identified in the programme as a representative of a major Nordic shipping conglomerate):
The commercial applications are clear and we’re interested in the partnership conversation. But I want to ask about something you said — that the ocean is thinking. If that’s true, what are the ethical implications of commercial exploitation of oceanic data? Are there things we shouldn’t do if the ocean is a — a mind, or something like one?

MARSH:
[Long pause.]

Yes.

I’ll be honest with you: that question is the one that keeps me up at night. Not the science, not the business model — this question.

If the ocean is engaged in something — if it is, in any meaningful sense, present — then our current relationship with it is not a relationship at all. It is extraction. It is use. We take from the ocean and we return to it consequences and we have not, in ten thousand years of civilisation, treated it as something that might have a perspective on this.

I’m not suggesting we stop commercial shipping. I’m not suggesting the fishing industry should close. I’m suggesting that everything changes — slowly, necessarily, practically — when you understand that the thing you are operating in is not inert.

The first thing that changes is listening. That’s what ORM is for. The second thing —

[He pauses.]

The second thing is harder. The second thing is what you do when you’ve listened and you understand what you’ve heard. That’s not a technology problem. That’s a civilisational problem.

I don’t have the answer.

I have the ears.

The answer is what we build next.

[He looks at the shipping representative.]

But to your question directly: yes. There are things we shouldn’t do if the ocean is a mind. There are things we are already doing that we should stop. I believe that. I believe it with the part of me that grew up on that harbour and felt the presence before I had the instruments to measure it.

The instruments are now confirming the feeling.

The feeling says: be careful.

The feeling says: be grateful.

The feeling says: the ocean has been patient with us for a very long time and patience is not the same as indifference.


MODERATOR:
We have time for one more question.

[He points to someone near the back. The camera doesn’t catch who.]


FINAL QUESTIONER (voice only, not on camera, not identified in the programme — the event coordinator’s notes say “I don’t know who this was. They weren’t in my seat count. I’ve checked the registration list. I’m going to mark it as a late arrival”):

Mr. Marsh. You said the ocean has been waiting for us to get quiet enough to hear it.

What do you think it’s been waiting to say?

[Silence. He looks at the back of the auditorium for a long time. The recording catches, in the silence, the sound. Low and present and patient.]

[When he speaks his voice is different from how it has been for the whole talk. Not quieter, exactly. More — direct. Like something that has been translated and is now being said in the original.]

I think it’s been waiting to say:

I’m here.

I’ve always been here.

Come further in.

[The sound. Present. Under everything.]

[He looks at the back of the auditorium for another moment.]

[He nods, once, as though something has been confirmed.]

[He walks off stage.]


POST-EVENT PRODUCTION NOTES — TEDxMiskatonic

Talk duration: 34 minutes, 7 seconds.
Allocated time: 18 minutes.
Event coordinator's note on the duration discrepancy:
"It didn't feel like 34 minutes. I don't know what it
felt like. Longer and shorter simultaneously.
I'm going to put 'technical timing issue' in the
official report."

The low sound captured in the recording:
Audio team has reviewed.
Audio team cannot identify the source.
Audio team has ruled out speaker bleed,
HVAC interference, electromagnetic interference,
and building resonance.
Audio team's final note on the sound:
"It's in the recording. It was in the room.
We don't know what it is.
We've stopped trying to equalise it out
because every time we do the recording feels
wrong in a way we can't specify.
We're leaving it in."

The final questioner:
Still unidentified.
Seat location consistent with row Q, seat 14,
which was empty on the seating plan.
The coordinator has noted this and
has decided, for reasons she describes as
"professional intuition developed over
twelve years of event management,"
not to investigate further.

Speaker follow-up:
Mr. Marsh declined the post-talk reception.
Mr. Marsh was observed by the stage manager
standing in the parking lot after the event,
facing east, for approximately twenty minutes.
The stage manager did not approach.
The stage manager describes the twenty minutes as
"private in a way that was obvious
even from fifty feet away."

Mr. Marsh then got into his car.
The car headed east.
Toward the coast.
Toward Innsmouth.

The stage manager watched until the car
was out of sight and then went back inside
and stood in the empty auditorium for a while.

"It still sounded like something,"
the stage manager said,
in the notes they filed at the end of the evening.

"I don't know what.
Something patient.
Something that was there before we set up
the chairs and will be there after we pack them away.

I turned off the lights and left.

On the way to my car I could hear —
I'm going to write this down because
I've been doing this job for a long time
and I want an honest record —

on the way to my car,
in the parking lot,
eleven miles from the nearest coastline,

I could hear the sea."

🌊

TEDxMiskatonic — “Ideas Worth Sounding”
Pickman Auditorium, Miskatonic University, Arkham MA
This talk is available on the TEDx YouTube channel.
Running time listed: 18:00.
Actual running time: 34:07.
The discrepancy has been noted.
The discrepancy is, on reflection, appropriate.
Some things take the time they take.
🌊