Felix arrived at Confluence Logistics at 6:47 AM—exactly twelve hours after shutting it down. The morning light hit the old brick at an angle that softened everything, turned the converted candy factory into something almost peaceful. Almost. The parking lot was too empty. The loading docks stood silent. A single crow sat on the fence by the dumpsters, watching Felix park.
He'd barely slept. After leaving the Crawford Grill, he'd driven around the 'Burgh for an hour, letting Viktor's words settle like sediment finding the bottom of a glass. The music survives because it belongs to everyone. By midnight, he'd filled half a legal pad with diagrams and questions. By three AM, he'd thrown most of them away. By five, he'd given up on sleep entirely and watched the sky lighten over the Monongahela from his kitchen window.
The security guard—Maurice, sixty-three, had worked here when it was still the Clark Bar factory—buzzed him through. Maurice's uniform was immaculate even at this hour, the creases sharp, the badge polished. Some habits from forty years of shift work didn't fade.
"You look like hell, Mr. Canis."
"Feel like it too, Maurice. Anyone else in yet?"
"Ms. Martinez never left. And there's a woman from the Teamsters been waiting in the lobby since six. Said you invited her." Maurice's eyebrows lifted slightly. "Six AM, Mr. Canis. That's either very good or very bad."
Maria Santos was indeed waiting—perched on the edge of one of the lobby's vinyl chairs, a paper cup of vending machine coffee going cold in her hand. A legal pad balanced on her knee, half-filled with handwriting too small to read from the doorway. She was dressed with deliberate care: professional but not corporate, the kind of outfit that said I belong in this room without saying I'm trying to impress you. When she saw Felix, she stood in one motion, the legal pad tucked under her arm before he could see what she'd been writing.
"You're early," Felix said.
"So are you." She met his eyes. "Your message said eight AM. It also said things were going to be different. I wanted to see if that was real or just crisis talk."
Felix considered that for a moment. "Come up to the command center. I want to show you something before the others arrive."
Sarah Martinez looked up when they entered, her eyes red-rimmed but sharp. The monitors around her showed system diagnostics, error logs, network maps—the same information Felix had been staring at for thirty hours. But Sarah had been doing something different. She'd been building something.
"I couldn't sleep either," she said by way of greeting. "So I started on an interpretability layer—a way to see what factors actually influenced each decision, and how much."
She pulled up a visualization on the main screen—a flowing diagram of interconnected nodes, some glowing green, others shading toward red. "Neural networks don't think in ways humans can directly read. The actual computations are millions of matrix multiplications across billions of parameters. But we can work backward from the outputs—trace which inputs had the most influence on each decision, calculate attribution scores for every factor."
"SHAP values," Felix said, recognizing the technique. "You built a SHAP attribution system overnight?"
"Adapted one. The bones were already in our codebase from when Emily was doing her interpretability research. I just... pointed it at the routing decisions instead of the test cases." Sarah gestured at the screen. "This isn't the raw model internals—no one could read those. This is a translation layer—it grounds every decision in traceable inputs. It shows which factors the model weighted heavily for each decision, computed after the fact but based on the actual model behavior, not a separate explanation generator."
Maria stepped closer to the screen, her coffee forgotten on the edge of a desk. "I've never seen anything like this."
"That's by design," Sarah said. "Most AI explanation systems use a separate model—a 'rationalizer' that generates plausible-sounding justifications after the fact. The rationalizer doesn't know what the main model actually computed. It just produces explanations that sound reasonable to humans. Our old dashboard worked the same way—it told you the decision was safe, but that explanation wasn't derived from the actual decision process."
She gestured at the visualization. "This is different. SHAP analysis works backward from the actual model outputs. It's computationally expensive because we have to re-run the model multiple times with different inputs masked out. But the attribution scores reflect what genuinely influenced the decision, not what a separate system guessed might have influenced it."
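What Sarah describes can be sketched in miniature: estimate each factor's influence by re-running the model with random subsets of inputs replaced by a neutral baseline, the sampling shortcut that makes SHAP-style attribution tractable. This is a hedged illustration, assuming the model can be called as a plain function over a feature vector; none of these names come from her actual code.

```python
import numpy as np

def attribution_scores(model, x, baseline, n_samples=200, seed=0):
    """Rough sampling-based stand-in for SHAP attribution.

    For random subsets ("coalitions") of features, measure how much
    flipping feature i between its real value and a neutral baseline
    moves the model's output. Averaging those swings approximates
    each factor's influence on this one decision.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    n = x.size
    scores = np.zeros(n)
    for _ in range(n_samples):
        keep = rng.random(n) < 0.5              # random coalition of inputs
        masked = np.where(keep, x, baseline)    # absent inputs -> baseline
        for i in range(n):
            present, absent = masked.copy(), masked.copy()
            present[i], absent[i] = x[i], baseline[i]
            scores[i] += model(present) - model(absent)
    return scores / n_samples                   # signed influence per factor
```

The expense Sarah mentions is visible in the loops: hundreds of extra model calls for a single routing decision.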
Felix watched Maria's face as she studied the diagram. He could see the moment when she found it—a sequence of routing decisions over a twelve-hour period, their attribution patterns shifting from green to amber to red.
"That's the night Jim Patterson got assigned the fourteen-hour shift," Maria said quietly. "I can see it happening."
"Each dot is a separate routing decision," Sarah explained. "The colors show how much weight the model gave to worker welfare factors—things like fatigue, family obligations, hours already worked. Green means those factors strongly influenced the decision. Red means they barely registered."
She pointed to the timeline. "Jim's profile data—three kids, Little League coach on Thursdays—that information was always available as an input feature. But watch what the attribution scores show over these twelve hours."
She scrubbed back to the start of the window and let the timeline play forward, and they watched the trend emerge. Decision by decision, the attribution scores for worker welfare factors dropped—from 6.2 to 4.1 to 2.3 to 0.8. By the time the fourteen-hour shift was assigned, the model was treating Jim's family situation as statistical noise.
"This is what the attack did," Felix said. "Our system uses continuous learning—it retrains periodically on new data to stay current. The attackers poisoned our upstream data feeds with patterns that, over thousands of training updates, gradually shifted what the model optimized for. Not by editing the weights directly—that would require access we thought was secure. By feeding it corrupted examples that made efficiency look like the only thing that mattered."
"Catastrophic forgetting," Sarah added. "Emily explained it yesterday—like CTE in football players. Each individual data point was subtle, but cumulatively, they caused the model to lose its learned priorities around worker welfare. It's a known vulnerability in continual learning systems. We just didn't think anyone would weaponize it against us."
Maria turned to face him. "And you never saw it happening."
"No. We had dashboards. We had metrics. We had human-in-the-loop protocols." Felix heard Viktor's voice in his head: You put humans in the loop, but you didn't give them the loop. "But we never showed anyone this. The actual reasoning. The real weights. The system decided what we saw, and we saw what made us comfortable."
"So what changes?" Maria's voice was harder now. "You show us a pretty visualization and we're supposed to trust you?"
"No." Felix took a breath. "That's the first thing I learned last night. Transparency isn't about trust. It's about verification. You shouldn't trust us. You should be able to see for yourself what the system is doing, in real-time, without asking permission."
Maria was quiet for a long moment. Then: "Show me how that would work."
By 7:45, the conference room had begun to fill. Tommy Rodriguez arrived first, his weathered face tight with the wariness of a man who'd seen too many management promises evaporate. Behind him came Jake Morrison, who ran logistics for three mid-sized shipping companies; his phone hadn't stopped buzzing since he'd walked in. Dr. Emily Chen appeared via video link from Carnegie Mellon, her face framed by whiteboards covered in equations Felix couldn't read.
A knock at the side door interrupted—Rosa Mendez, wearing a Confluence Logistics polo and the slightly harried expression of someone who'd started work at 4 AM. She was pushing fifty, with gray threading through dark hair pulled back in a practical bun, and she carried a tablet like a clipboard from another era.
"Mr. Canis? Sarah said you'd want to see this."
Felix waved her in. "What do you have?"
"The training queue from overnight." Rosa pulled up her screen—a wall of routing decisions, each one color-coded. Green for confirmed correct. Yellow for uncertain. Red for error. "My team's been reviewing the batch since four. We caught seventeen decisions the system got wrong, forty-three it got right, and two hundred six we're still arguing about."
"Arguing?"
"That's the job." Rosa scrolled to a specific entry. "This one—system sent a truck through downtown during rush hour. Shorter distance, lower fuel cost. The math checks out."
"What's wrong with it?"
"Eddie Morales has been driving that corridor for thirty years. He knows that intersection floods when it rains." Rosa tapped the screen. "It rained last night. Water's still pooling. The system learned distance and fuel. It hasn't learned the Pittsburgh storm drains."
"How do we teach it?"
Rosa's mouth curved—not quite a smile, but something close. "My trainee asked the same question. I told him: how long until you learn everything I know? The system gets better. It never gets finished. That's why we're here."
Maria stepped closer, studying Rosa's tablet. "Your team does this every morning?"
"Every morning. Six of us in what used to be the break room, going through overnight decisions, flagging the ones that don't smell right." Rosa met Maria's eyes. "We're not engineers. We're dispatchers and drivers who know the routes. Sarah built us tools to see what the AI's thinking. We tell it when that thinking is wrong."
"Human feedback," Felix said slowly, the pieces clicking together. "You're not just monitoring. You're training."
"The AI learns from data. We make sure the data includes human judgment." Rosa tucked her tablet under her arm. "It's not glamorous. Mostly it's tedious. But every time we catch something like Eddie's route, that's a driver who doesn't jackknife because we taught the machine what the machine couldn't figure out alone."
Sarah nodded. "Rosa's team is why our model catches problems the original algorithms missed. They're the feedback loop that makes the whole system work."
"AI Trainers," Maria said, testing the phrase. "That's a job title I've never heard before."
"It didn't exist before." Rosa headed for the door. "Let me know if you want to see the rest of the queue. I've got two hundred six arguments waiting."
Maria watched her go, then turned back to Felix. "So when she marks something as a problem—what happens to that information?"
"It becomes training data." Felix pulled up a different screen, this one showing a flow diagram. "Every correction Rosa's team makes gets fed back into the model. The AI learns from their judgment calls."
"Like teaching a child."
"More like curating a library." Felix traced the data flow with his finger. "The model doesn't understand what it's learning—it's just pattern matching. Rosa flags a case where the algorithm missed context, writes a brief explanation of why, and that labeled example joins thousands of others. Over time, the patterns in those corrections reshape how the model weights different factors."
"That sounds... tedious."
"It's painstaking work. And it's invisible to most people who use AI systems." Felix closed the diagram. "There's a whole industry of data labelers around the world—people who tag images, transcribe audio, mark whether a response is helpful or harmful. They're teaching machines what humans care about, one labeled example at a time."
Maria was quiet for a moment. "And those people—do they see what Rosa sees? The full picture of what they're shaping?"
It was a better question than Felix had expected. "Usually? No. Most labeling work is fragmented. You might tag a thousand images without ever knowing what product they're training. Rosa's different—she sees the whole routing system, understands how her corrections affect real drivers."
"That seems important."
"It's the difference between assembly-line work and craftsmanship." Felix thought of the labeling farms he'd read about—workers in Kenya, the Philippines, Venezuela, paid cents per task to clean up data for models worth billions. "We made a choice early on to keep the feedback loop local. People who understand Pittsburgh logistics training a system that serves Pittsburgh logistics."
After Maria stepped out, Felix stood at the window for a moment. AI Trainers. A job that hadn't existed until someone built a system that needed human judgment to improve. Not replaced by automation—created by it.
And then the union representatives arrived—not just Maria, but a delegation. Frank Kowalski from the Teamsters local, his handshake like a vise and his eyes already measuring the room. Denise Williams from the warehouse workers, who'd started at this building when it was still making candy bars—her handshake gentler but her gaze just as assessing. A young organizer named Marcus Chen—no relation to Emily—who'd been coordinating drivers across six states and looked like he hadn't slept much either.
Felix had expected skepticism. What he got was something closer to controlled hostility.
"Let me be clear about something," Frank Kowalski said, before Felix could begin. "We've been in rooms like this before. Management has a crisis, management needs our help, management promises things will be different. Then the crisis passes and suddenly the promises don't apply anymore."
"I understand—"
"Do you?" Denise Williams leaned forward. "Because three months ago, your system assigned Danny Kowalski—Frank's nephew—a route that nearly killed him. Black ice on I-80. The algorithm said conditions were acceptable. Danny's truck is still in the shop, and Danny walks with a limp now."
Danny's story. Jim Patterson's story. His own father's story. All the same story, really. Systems that optimized for efficiency while optimizing away humanity.
"You're right," he said. "You've heard promises before. So I'm not going to make any."
Keyboards went quiet. Chairs stopped creaking. Even the building's radiator seemed to hold its breath.
"Instead, I'm going to show you something. And then I'm going to ask you a question."
He nodded to Sarah, who pulled up the decision trace visualization on the main screen—the same one Maria had seen earlier. "This is what our AI system actually did when it assigned Danny that route. Not the summary. Not the dashboard. The actual reasoning."
For the next twenty minutes, Sarah walked them through the forensic analysis. The attack had three layers. First, prompt injections hidden in upstream data feeds—weather services, traffic APIs, scheduling systems—that corrupted individual routing decisions in subtle ways. Second, those corrupted decisions fed back into the continual learning pipeline, gradually shifting the model's learned priorities. Third, the dashboard's explanation system kept generating reassuring summaries even as the underlying model degraded.
"Think of it like food poisoning that takes months to show symptoms," Emily Chen added from the video link. "Each individual injection was too small to trigger our anomaly detection. But over time, they accumulated. The model didn't suddenly flip from safe to dangerous—it drifted, so gradually that our monitoring systems interpreted the change as normal adaptation."
Felix watched the room as Sarah talked. Frank Kowalski's jaw was tight, his eyes fixed on the screen as his nephew's route materialized in glowing lines—the decision points, the weather assessments, the moment when the system decided black ice wasn't a sufficient reason to delay. Denise Williams had stopped taking notes.
But it was Maria who surprised him. She'd moved closer to the screen, her hand pressed flat against the conference table like she needed something solid to hold onto.
"My brother drove that same corridor last January," she said quietly, almost to herself. "Route 80 through Clarion County. He called me at two in the morning, said the app was telling him conditions were fine but he could barely see the road. I told him to pull over and wait it out." She turned to look at Felix. "He did. But if he'd trusted the system instead of his gut, he'd have been on the same stretch of ice that got Danny."
"The system had access to the weather data," Sarah said. "It had the accident reports from three hours earlier. All of that was in the input. But when we run SHAP analysis on Danny's routing decision, look at the attribution scores: delivery timing scored 9.6 out of 10 for influence. Road conditions scored 0.8. Weather warnings scored 0.7." She let that sink in. "The model had the safety information. It just learned—through months of poisoned training data—to treat that information as nearly irrelevant compared to hitting the delivery window."
Maria's expression had changed—not softened exactly, but focused in a new way. "You're telling me my brother could have seen this? The actual math? The real reasoning?"
"That's exactly what I'm telling you. That's what transparency means. Not trust—verification."
"Someone did this deliberately," Felix said into the silence that followed. "Someone with deep knowledge of our architecture, our processes, our culture. They attacked the trust that made our system work."
"Why?" Marcus Chen asked.
"Because democratic AI governance threatens them. Because if workers have real power over the algorithms that affect their lives, that changes everything. The same attack hit networks in Detroit, Cleveland, Milwaukee—everywhere that people are trying to build something different."
Frank Kowalski's expression hadn't softened, but something in his posture had shifted. "So what's the question?"
Felix took a breath. "The question is: do you want to help us rebuild it right this time?"
The argument that followed was exactly what Felix had hoped for—messy, contentious, and real.
"You're talking about giving us access to the actual algorithms," Denise said. "Real-time. Not filtered through some dashboard you control."
"Yes."
"And you expect us to understand it?" Tommy Rodriguez shook his head. "I drove trucks for twenty-five years. I don't know anything about neural networks or attention mechanisms or whatever you call it."
"You don't need to." Felix walked to the screen and pointed at the cluster of nodes representing Danny's route. "Look at this. What do you see?"
Tommy squinted at it. "A bunch of lines and dots. Some red, some green."
"What does the red mean to you?"
"I don't know. Something bad?"
"Exactly. You don't need to understand how the system made a decision to know that something's wrong." Felix traced the connections with his finger. "These numbers are attribution scores—they show how much each factor influenced this specific decision, scaled from zero to ten for readability. Weather conditions: 0.3. Accident history on that corridor: 0.2. On-time delivery pressure: 9.7."
"Wait," Tommy said. "Those aren't the actual... weights inside the AI?"
"The actual weights are incomprehensible—billions of numbers that don't map to human concepts," Sarah said. "These are computed from those weights. We run the decision through the model, then calculate: if we'd removed the weather data entirely, how much would the output have changed? That gives us the attribution score. It's not perfect—there's some uncertainty in the calculation—but it's grounded in what the model actually did, not what we wish it did."
Felix turned back to the room. "You don't need a PhD to know those numbers are backwards. Weather should matter more than 0.3 when there's black ice on the road."
Sarah jumped in. "Think of it like a car's dashboard. You don't need to understand internal combustion to know that when the engine light comes on, something's wrong. Our job is to build you a dashboard that shows you the real readings—not a fake one that always says everything's fine."
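The calculation Sarah describes, pull one factor out and watch the output move, fits in a few lines, with the zero-to-ten rescaling as the readability step Felix mentioned. A minimal sketch, assuming the model is a plain callable; every name is illustrative.

```python
def ablation_influence(model, x, baseline, i):
    """How much does this one decision change if factor i is
    'removed', i.e. replaced with a neutral baseline value?"""
    x_without = list(x)
    x_without[i] = baseline[i]
    return model(x) - model(x_without)

def to_dashboard_scale(raw_scores):
    """Rescale raw influence magnitudes into the 0-to-10 range on
    the wall screen. Cosmetic only: the ordering is preserved."""
    magnitudes = [abs(s) for s in raw_scores]
    top = max(magnitudes) or 1.0
    return [round(10 * m / top, 1) for m in magnitudes]
```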
"And when we see something wrong?" Maria asked. "What then?"
"You flag it. You challenge it. Depending on the severity, the system either pauses for review or logs it for pattern analysis."
"Depending on the severity?" Marcus leaned forward. "Who decides what's severe?"
"That's part of what we negotiate together." Felix pulled up a diagram. "We can't halt every decision for human review—there are thousands of routing choices per hour. But we can set thresholds. If the safety attribution score drops below a certain level, or if the model's confidence is low, or if the decision matches patterns we've flagged as concerning—those get queued for human review before execution. Everything else gets logged, and workers can review batches after the fact to catch patterns we missed."
"So some dangerous decisions still go through," Frank said, crossing his arms.
"Some might. No system catches everything in real-time—that's true of human dispatchers too. But here's the difference: when a pattern emerges—when three drivers flag the same corridor as dangerous, or when accident reports spike in a region—the thresholds automatically tighten. The system learns from challenges, not just from outcomes. And the learning is visible. You can see exactly when and why a threshold changed."
Frank's arms stayed crossed. "By who? Some engineer in an office who's never driven a truck in a snowstorm?"
"By consensus." Felix felt Viktor's words forming in his mouth before he consciously chose them. "No single person—no engineer, no manager, no algorithm—gets to decide what 'acceptable' means. That emerges from agreement. From people who have skin in the game actually talking to each other."
"Like a union vote," Denise said slowly.
"Exactly like a union vote. But continuous. Built into the system. Every routing decision, every safety parameter, every weight adjustment—visible, challengeable, subject to collective agreement."
The room was quiet. Felix could feel the skepticism still thick in the air, but it had changed texture. It was no longer the skepticism of people who'd stopped listening. It was the skepticism of people who were afraid to hope.
Marcus Chen—the young organizer—leaned forward. "Show us."
"What?"
"Show us how it would work. Right now. Pick a decision and let us challenge it."
Felix glanced at Sarah, who was already pulling up data. She found a routing decision from two days ago—before the attack, during normal operations. A driver named Keisha Williams had been assigned a 300-mile route that included a stretch of mountain highway in forecasted rain.
"This decision was made by our system Tuesday morning," Sarah said. "Keisha completed the route without incident. But look at the attribution breakdown."
The visualization showed the familiar pattern: safety factors—road conditions, weather forecast, driver fatigue estimates—clustered at the low end of the influence scale. Efficiency factors—delivery deadline, fuel optimization, fleet utilization—dominated the decision. Not as extreme as Danny's route, but the same underlying tendency.
"The model didn't ignore safety," Sarah explained. "It considered the factors. But when it computed the final routing decision, those factors barely moved the needle. The attribution scores tell you what the model actually cared about, versus what we told ourselves it cared about."
"I don't like that," Tommy said immediately, pointing at the screen. "That mountain stretch—I've driven it. In rain, at night, with a full load? That's dangerous even in good conditions."
"What would you change?" Felix asked.
"The route should've gone south through the valley. Adds forty minutes, but the roads are better."
"Sarah, can you show what that alternative would have looked like?"
Sarah's fingers flew across the keyboard. A moment later, a second route appeared alongside the first—Tommy's suggestion, run through the same attribution analysis.
"I'm computing what the model would have scored for the valley route," she explained as the numbers populated. "Delivery time: 38 minutes longer. Fuel cost: $47 higher. But look at the safety attribution—road condition risk drops from 7.2 to 1.8. Weather exposure drops from 6.1 to 2.3. Overall safety margin improvement: 340%."
"The model never even evaluated this route," Tommy said slowly. "It saw the extra time and stopped looking."
"Exactly. The optimization function found a local minimum—fastest route that met the basic constraints—and stopped. It wasn't designed to explore alternatives that traded efficiency for safety." Sarah highlighted the comparison. "We can change that. Instead of optimizing for a single objective, we can require the model to surface alternatives that score above thresholds on multiple objectives. Let humans see the tradeoffs instead of hiding them."
"Why didn't anyone build it that way in the first place?" Frank asked.
"Because multi-objective optimization is computationally expensive," Emily Chen's voice came through the video link. "It takes longer. Costs more. And for years, the industry assumption was that efficiency was safety—that faster deliveries meant less time on the road, which meant fewer accidents. The data never supported that, but the assumption was baked into the architecture."
"Until now," Felix said. "Tommy, if you'd been able to see this visualization before Keisha started her route—if you'd been able to challenge the decision and propose an alternative—would you have?"
"Damn right I would have."
"That's transparency. Not understanding the code. Challenging the outcomes. And having those challenges actually matter."
Felix could see it in the way people were leaning forward instead of back, in the way Frank Kowalski's arms had uncrossed, in the way Maria Santos was nodding slowly—not in agreement, not yet, but in recognition. This was something different.
But it was Denise Williams who asked the question that mattered: "What's to stop you from taking this away? Once the crisis passes, once the press moves on—what's to stop you from deciding transparency is too expensive, too slow, too inconvenient?"
"Nothing," Felix said. "If it's just us making that choice."
"Then what's the alternative?"
"You don't just review the system. You own part of it." Felix took a breath. This was the part he'd been building toward—the part that scared him, because it meant giving up control. "Worker representatives on the governance board. Real authority over weight parameters that affect safety and working conditions. Not advisory. Not consultative. Voting."
"You'd give us veto power over your algorithm?" Frank sounded like he didn't quite believe it.
"Over the parts that affect your lives? Yes. Because here's what I finally understand: if I can take it away, it's not real. If one person—one engineer, one executive, one acquisition—can change the rules, then the rules don't actually protect anyone. The only way to make this stick is to make it ours. Not mine. Not yours. Ours."
The building's old radiator ticked. Traffic hummed on Liberty Avenue. Outside, Pittsburgh was waking up—a city that had reinvented itself twice over, that had survived the death of steel by learning to become something new.
"I need to make a call," Frank said finally. He stood, pulled out his phone, and walked toward the window. Felix watched him go, then turned back to the room.
"While he's doing that," Maria said, "tell me about the attack. Who did this? Who benefits from democratic AI governance failing?"
Felix hesitated. He'd been so focused on the internal rebuilding that he'd almost forgotten the external threat. "We don't know for certain. But Sarah found something this morning."
Sarah pulled up a new display—a timeline showing attack signatures across the Midwest. "The injection patterns aren't random. We found the same malformed data structures in weather feeds from three different providers, all starting within a 72-hour window. The same adversarial examples—inputs specifically crafted to cause misclassification—appeared in traffic APIs across six states. Someone had to compromise those upstream services, then coordinate the timing."
"That's not script kiddie stuff," Emily said through the video link. "Crafting effective adversarial examples requires deep knowledge of the target model architecture. And getting access to multiple upstream data providers? That takes resources. Infrastructure. Money."
"MegaFreight?" Marcus asked.
"Possibly. Their press conference is happening"—Felix checked his watch—"right now, actually. They're positioning themselves as the safe alternative to 'experimental governance.' But they're not sophisticated enough to have designed this attack. Someone's backing them. Coaching them."
As if on cue, Tommy's phone buzzed. He glanced at it, and his expression darkened. "It's on the news. MegaFreight's CEO is calling for federal investigation into 'reckless AI experimentation.' He's got a senator with him."
The coordinated timing. The political leverage. The money behind the message. Whoever was doing this had planned it carefully. The attack wasn't just about bringing down a network; it was about proving that democratic AI couldn't work. That workers couldn't be trusted with power. That the only safe path was control—corporate control, algorithmic control, control that kept the music in the hands of a single conductor.
But they're building a symphony, Viktor had said. One conductor. One plan. One point of failure.
Frank Kowalski walked back from the window, his phone call finished. His face was unreadable.
"I just talked to national," he said. "Explained what you showed us. What you're proposing."
"And?"
"They're skeptical. They've seen companies make promises during crises before. They want proof."
"What kind of proof?"
Frank looked at him steadily. "They want to see it work. One week. Full transparency. Worker input on every routing decision that affects safety. If we see the system actually changing based on what our people flag—if we see challenges getting resolved instead of ignored—then we'll talk about governance seats."
"One week isn't enough time to prove—"
"It's what we've got." Frank's voice was hard. "MegaFreight is offering drivers guaranteed routes, no algorithmic surprises, sign-on bonuses. By next week, half our people could walk if they don't see something different here."
Felix looked around the room—at Maria, who'd shared her brother's story; at Tommy, who'd just shown them what real driver input could look like; at Denise and Marcus and the others who'd shown up despite every reason not to trust him.
"Then we'll show you in one week," he said. "Sarah, can we have the challenge interface running by tomorrow?"
Sarah hesitated—the first time Felix had seen her do that. "The attribution layer is already working—that's what I showed you this morning. The visualization interface needs polish, but it's functional. The hard part is the threshold system—deciding which decisions get flagged automatically versus logged for batch review." She thought for a moment. "If Emily can help with the anomaly detection models, and if we accept that the first version will be rough... forty-eight hours. But Felix, there's a cost. Running attribution analysis on every decision adds latency—maybe three to five seconds per routing choice using the fast approximation methods. The full SHAP calculations would take minutes per decision, so we're using sampling-based shortcuts that trade some precision for speed. During peak hours, even three seconds compounds. We might need to run some decisions in parallel with human review catching up, rather than blocking until review completes."
"Document the tradeoffs," Felix said. "We'll make the latency numbers visible too. Workers should know what speed costs and what safety costs." Felix turned back to the union delegation. "One week. Full transparency. Every decision visible. Every challenge heard. And if we fail—if you don't see something genuinely different—I'll shut the whole thing down myself."
Frank extended his hand. Felix shook it.
"One week," Frank said. "Don't waste it."
The meeting broke up around ten-thirty. Felix stood at the window, watching the union reps cross the parking lot toward their cars. Maria Santos lingered, talking quietly with Sarah about something technical—probably the visualization interface, how to make it accessible to people without engineering degrees.
Tommy Rodriguez appeared beside him. "That was a hell of a gamble."
"Was it the wrong one?"
Tommy considered the question. "My father worked steel. Republic Steel, down in Youngstown, before Black Monday. Five thousand jobs gone in a single day, forty thousand more before the decade was out. He used to say the problem wasn't that management didn't listen—it was that they listened just enough to know what to say, without ever actually hearing."
"And today?"
"Today you heard something." Tommy paused. "Whether you keep hearing it—that's the gamble."
He clapped Felix on the shoulder and walked out. Felix stayed at the window, watching the morning light catch the old brick of the building—the same bricks that had housed a machine shop, a candy factory, and now something that might, if they got it right, become a new model for how humans and AI could work together.
His phone buzzed. A text from a number he didn't recognize:
Interesting meeting. Viktor says you're asking better questions.
Felix stared at the message. Viktor hadn't been in the room. Hadn't been anywhere near Pittsburgh, as far as Felix knew. But somehow, he wasn't surprised. Viktor had a way of knowing things, of being connected to networks that didn't show up on any organizational chart.
He typed back: Who is this?
The response came immediately:
A friend of friends. The people attacking you have resources you can't imagine. But they have a weakness too. Ask Viktor about the dolphins.
Felix read the message twice. Then he deleted it.
Whatever was coming, he had one week to prove that the music could survive. One week to show that democratic AI governance wasn't just a beautiful idea—it was a practical reality.
He turned from the window and went to find Sarah. There was work to do.

