A senior software engineering technical leader in Cisco’s Security and Trust Organization, one who helps shape industry-wide cybersecurity standards through OASIS committees, spent two weeks evaluating hackathon projects — and found that the gap between “it works” and “it’s safe to deploy” is where civic technology faces its most consequential decisions.
Software built in 72 hours rarely comes with a security model. It comes with features, demonstrations, polished READMEs, and deployment links — the artifacts of a hackathon optimized for judging. The authentication is minimal. The input validation is optimistic. The API keys are hardcoded. These are acceptable compromises in a competition context. They become unacceptable the moment someone deploys the software to serve a real population — and in civic tech, that population is almost always vulnerable by definition.
Sergii Demianchuk has spent over sixteen years building the systems and frameworks that prevent exactly this class of failure. As a Senior Software Engineering Technical Leader in Cisco’s Security and Trust Organization, he operates at the intersection of application security, vulnerability management, and industry-wide standards governance. His thirteen years in application security predate his current role at Cisco and span a career trajectory from application architecture at SoftServe to engineering leadership at one of the world’s largest network infrastructure companies. He is an active member of the OASIS OpenEoX and CSAF Technical Committees — bodies that define how the global technology industry communicates about product lifecycle, end-of-support timelines, and cybersecurity advisory formats.
When Demianchuk evaluated projects at sudo make world 2026 — a 72-hour hackathon organized by Hackathon Raptors challenging teams to build open-source tools for social good — he brought a standards-oriented perspective to submissions that ranged from refugee assistance platforms to civic data visualization tools to children’s mental health applications. What he found was not a lack of ambition or engineering talent, but a consistent underestimation of the security foundations that separate a hackathon project from a deployable system.
“The question I ask about every system I evaluate is not ‘does it work?’ but ‘what happens when it fails?’” Demianchuk says. “In civic tech, the answer to that question involves real people in real danger. A failed authentication flow in an enterprise app means a support ticket. A failed authentication flow in a refugee assistance platform means someone’s location is exposed to the authority they’re fleeing.”
The API Key in the Browser Bundle
Refugee Ready by Team Dua was the strongest submission in Demianchuk’s evaluation batch, finishing eleventh overall at 3.781/5.00. The platform provides a comprehensive “first 72 hours” toolkit for displaced populations — multilingual support spanning ten languages with proper right-to-left rendering, OCR-powered document translation, offline-first localStorage for connectivity-challenged environments, and real-time mapping of WiFi spots, shelters, food resources, and legal aid.
“The product design shows genuine empathy for the user population,” Demianchuk observes. “Supporting Tigrinya in Ge’ez script, implementing halal food filters, including women-only shelter options — these are design decisions that come from understanding the specific needs of refugees, not from reading a requirements document. The offline-first approach acknowledges a reality that many enterprise developers never encounter: your users may not have reliable internet, and your application needs to work anyway.”
But the platform contained a security vulnerability that Demianchuk considers emblematic of a broader pattern in rapid civic tech development: the Groq API key was hardcoded into the frontend JavaScript bundle. “Anyone who opens the browser’s developer tools can extract that key,” he explains. “In an enterprise context, that’s a cost exposure — someone runs up your API bill. In a refugee assistance context, the implications are different. If the API key provides access to user queries, an adversary can monitor what documents refugees are translating, what legal questions they’re asking, what locations they’re searching for. The attack surface isn’t the API cost — it’s the intelligence the API traffic reveals about vulnerable users.”
The fix is technically straightforward — route API calls through a backend proxy, which the team had already partially implemented. “The server-side routes exist in the codebase,” Demianchuk notes. “The team built the secure architecture and then bypassed it in the client for convenience, probably under time pressure. This is the exact pattern I see in enterprise development as well — the secure path exists, but the fast path is easier, and under deadline pressure, the fast path wins. In a standards-based security program, we prevent this with automated checks that flag client-side credential exposure before deployment. Hackathon teams don’t have that infrastructure.”
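The proxy pattern Demianchuk describes can be sketched in a few lines. This is a minimal illustration using only the Python standard library; the endpoint URL, environment-variable name, and payload fields are invented for the example and do not reflect the team’s actual routes or the Groq API.

```python
import json
import os
import urllib.request

# Hypothetical upstream endpoint; the real API path would differ.
API_URL = "https://api.example.com/v1/translate"


def build_proxied_request(user_payload: dict) -> urllib.request.Request:
    """Server-side: attach the secret key from the environment.

    The browser only ever sends `user_payload` to our own backend;
    the key is read on the server and never ships in the JS bundle.
    """
    key = os.environ["TRANSLATE_API_KEY"]  # set on the server only
    return urllib.request.Request(
        API_URL,
        data=json.dumps(user_payload).encode(),
        headers={
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Demo: the client-side payload contains no credentials at all.
os.environ["TRANSLATE_API_KEY"] = "server-side-secret"  # demo value only
req = build_proxied_request({"text": "hello", "target": "ti"})
print(req.get_header("Authorization"))  # key attached server-side
```

The point of the design is visible in the last lines: the frontend posts only document text to a backend route, and the credential exists solely in the server process’s environment, exactly the architecture the team had already built server-side.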
The ten-language support with proper RTL handling and the OCR-to-AI document translation pipeline demonstrated technical sophistication that made the security gap more concerning, not less. “This is clearly a capable engineering team,” Demianchuk says. “They’ve solved genuinely hard problems — bidirectional text rendering, offline data persistence, real-time geospatial queries. The security gap isn’t a skills problem. It’s a priorities problem. And that’s precisely what security standards are designed to address — they ensure that security isn’t the thing that gets deprioritized when the deadline approaches.”
When Surveillance Claims Social Good
RetailGuard-AI by team Big dawgs, finishing twenty-third at 3.194/5.00, presented Demianchuk with a submission that raised questions extending beyond technical security into the ethics of surveillance systems framed as social impact tools.
The project uses YOLO-based pose estimation and XGBoost classification to identify “suspicious behavior” in retail environments — a real-time surveillance system with a polished Streamlit dashboard. The team demonstrated a functioning ML pipeline with legitimate training data and working inference.
“The technical execution is competent,” Demianchuk acknowledges. “Getting real-time YOLO detection into a usable dashboard in 72 hours shows engineering skill. But the social good framing requires scrutiny that the technical evaluation alone doesn’t provide.”
Demianchuk’s concerns center on three areas that his work with security standards has made him acutely aware of. “First, the definition of ‘suspicious behavior’ is not a technical question — it’s a social and legal question. What training data defines suspicion? Has it been tested for demographic bias? In security standards, we distinguish between ‘vulnerability’ — a technical fact — and ‘risk’ — a contextual assessment that depends on who is being affected. This system’s risk model is implicitly defined by its training data, and that data isn’t documented or validated.”
“Second, the system stores video captures with faces visible. Under GDPR, that’s biometric data processing requiring explicit legal basis. Under many US state laws, it triggers notification requirements. For a hackathon, these are academic concerns. For deployment in a real retail environment, they’re compliance obligations with financial penalties.”
“Third, the dashboard labels individuals with ‘shoplifter certainty’ scores — a framing that presumes guilt based on pose estimation. In application security, we’re careful about the difference between ‘detection’ and ‘attribution.’ A system that detects unusual movement patterns is an analytics tool. A system that labels individuals as shoplifters based on those patterns is making an accusation. The distinction matters legally, ethically, and architecturally.”
The trained model artifact was excluded from the repository by a .gitignore rule that blocked all JSON files — an accidental consequence of a broad exclusion pattern. “This is a common configuration mistake, but for a security-sensitive application it has specific implications,” Demianchuk observes. “Without the model artifact in the repository, independent reviewers cannot verify what the model has learned, test for bias in its classifications, or reproduce its behavior. In security product evaluation, the inability to independently verify claims is itself a finding. Transparency isn’t just a virtue — it’s a requirement for trust.”
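The scoping fix for that exclusion is a one-line negation rule in `.gitignore`; the filename `model.json` below is hypothetical, standing in for whatever the team’s artifact is actually called.

```gitignore
# Ignore generated JSON (logs, caches, temp output)...
*.json
# ...but keep the trained model artifact so reviewers can audit it
!model.json
```

Negation patterns (`!`) re-include a path that an earlier pattern excluded, which lets a broad exclusion coexist with a deliberately published artifact.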
Demianchuk sees a constructive path forward for the project. “If the team reframed this as an ‘anomalous activity detection’ system rather than a ‘shoplifter identification’ system, removed face storage by default, documented the training data distribution, and added demographic bias testing to the pipeline, this becomes a genuinely useful tool for small business owners. The engineering is solid. The security and ethical framing needs to catch up.”
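The demographic bias testing Demianchuk recommends can start with something very simple: compare the rate at which the classifier flags people across groups. The sketch below is a generic parity check, not the team’s pipeline; the group labels and sample data are invented.

```python
from collections import defaultdict


def flag_rate_by_group(records):
    """Per-group rate at which the classifier flags individuals.

    `records` is a list of (group_label, flagged: bool) pairs from a
    held-out evaluation set (hypothetical data here).
    """
    totals = defaultdict(int)
    flags = defaultdict(int)
    for group, flagged in records:
        totals[group] += 1
        if flagged:
            flags[group] += 1
    return {g: flags[g] / totals[g] for g in totals}


def disparity(rates):
    """Max/min flag-rate ratio across groups: ~1.0 suggests parity,
    large values suggest the model treats groups differently."""
    vals = list(rates.values())
    return max(vals) / min(vals)


sample = [("A", True), ("A", False), ("A", False), ("A", False),
          ("B", True), ("B", True), ("B", False), ("B", False)]
rates = flag_rate_by_group(sample)
print(rates, disparity(rates))  # group B is flagged twice as often as A
```

A disparity of 2.0 on real evaluation data would be exactly the kind of finding that should block deployment until the training distribution is examined.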
The Promise-Implementation Gap
Nurture Minds, finishing twenty-fourth at 3.100/5.00, addressed one of the most sensitive application domains in civic tech: mental health tools for neurodivergent children. The user interface was polished, the navigation was thoughtful, and the feature list was ambitious — FastAPI, OpenCV, TensorFlow, MongoDB, WebSocket, Docker, and AI-powered behavioral analysis.
“The mission is important and the interface shows genuine craft,” Demianchuk says. “But in my role at Cisco’s Security and Trust Organization, one of the most critical assessments we perform is verifying that claimed capabilities match actual implementation. For security products, this is a non-negotiable requirement — if a firewall claims to inspect SSL traffic but doesn’t, the security posture of every organization using it is compromised.”
Demianchuk found that several headline AI features were simulated rather than implemented. Video analysis returned hardcoded results after a timeout rather than performing actual inference. The claimed backend stack — TensorFlow, OpenCV, real-time emotion detection — was not present in the repository.
“In a hackathon, presenting a UI prototype is entirely valid,” he clarifies. “The problem arises when the documentation doesn’t distinguish between ‘implemented’ and ‘planned.’ A parent who downloads this application expecting AI-powered developmental analysis for their child is making a decision based on a capability claim that doesn’t match reality. In the standards world, we call this a ‘misleading security claim’ — and for products handling children’s data, the trust implications are significant.”
The deeper concern, from Demianchuk’s standards perspective, is about the data handling implications of the claimed features. “If the video analysis were real, the system would be processing biometric data of minors — one of the most regulated data categories globally. COPPA in the US, the Children’s Code in the UK, age-specific provisions under GDPR — all impose strict requirements on how children’s data is collected, processed, and stored. The fact that the feature is simulated means these compliance requirements don’t currently apply, but the architecture implies they will apply once the feature is built. Planning for that compliance should start now, not after deployment.”
Civic Data and the Accountability of Visualization
gap_map by team Rapunz, finishing twelfth at 3.681/5.00, took a creative approach to civic engagement that Demianchuk found conceptually compelling: representing urban resource distribution data using a retro 16-bit visual metaphor, where neighborhoods appear as terrain tiles whose characteristics reflect access to food, transit, healthcare, and other essential services.
“The visualization metaphor is genuinely inventive,” Demianchuk observes. “Making civic data feel like an explorable game world rather than a corporate dashboard could lower the barrier for community engagement. The Gini coefficient calculations are mathematically correct, and the budget-versus-reality diff mode is an interesting accountability mechanism.”
But the data layer raised immediate concerns. “Every time the system loads, it generates random scores for every neighborhood,” Demianchuk says. “New York City gets a different food access profile every session. For a data visualization tool, this is a critical gap — not because random data is inherently wrong during development, but because the tool’s entire value proposition depends on data integrity. If the visualization layer is polished and convincing, users may treat random data as factual. In a civic context, that means a community organizer might present fabricated access scores at a city council meeting without realizing the numbers were generated, not researched.”
From a security and trust perspective, Demianchuk connects this to the broader challenge of data provenance in civic technology. “In the CSAF standards work, we spend considerable effort on the question of data provenance — where did this information come from, who validated it, when was it last updated. Civic data tools need the same discipline. A map that shows food deserts needs to cite its data source, its update frequency, and its methodology. Without that provenance metadata, the visualization is an assertion without evidence.”
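The provenance metadata Demianchuk describes can be as lightweight as a record attached to every displayed score. The sketch below shows one possible shape; the field names, the neighborhood, and the source citation are illustrative, not taken from gap_map’s codebase.

```python
from dataclasses import asdict, dataclass
from datetime import date


@dataclass(frozen=True)
class Provenance:
    """Minimal provenance record a civic-data layer could attach to
    every score it renders (field names are illustrative)."""
    source: str          # who published the underlying data
    methodology: str     # how the score was derived
    last_updated: date   # when the data was last refreshed
    synthetic: bool      # True for demo/placeholder data

score = {
    "neighborhood": "Example Heights",  # hypothetical map tile
    "food_access": 0.42,
    "provenance": asdict(Provenance(
        source="city open-data portal (example)",
        methodology="share of residents over 1 mile from a supermarket",
        last_updated=date(2025, 1, 1),
        synthetic=False,
    )),
}
print(score["provenance"]["synthetic"])
```

An explicit `synthetic` flag would also solve the random-data problem directly: the UI could watermark any tile whose provenance record marks the numbers as generated rather than researched.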
When the Driving Instructor Meets the Compliance Framework
DriveWise, finishing twenty-first at 3.244/5.00, approached road safety education through an interactive driving simulator with a novel dual scoring system separating legal compliance from ethical driving behavior. The concept — teaching drivers not just to follow laws but to cultivate responsible driving culture — resonated with Demianchuk’s standards-oriented mindset.
“The dual scoring system is philosophically interesting,” he says. “In security standards, we distinguish between ‘compliance’ — meeting minimum documented requirements — and ‘security posture’ — the actual resilience of a system. A system can be compliant and still vulnerable. Similarly, a driver can follow every traffic law and still drive irresponsibly. The team captured that distinction in their game mechanics.”
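The compliance-versus-posture distinction maps naturally onto two independent score accumulators. This is a toy sketch of the idea, not DriveWise’s actual mechanics; the event names and penalty values are invented.

```python
# Separate penalty tables: an event can be legal yet ethically costly,
# and vice versa. Values are illustrative only.
LEGAL_PENALTIES = {"ran_red_light": 30, "speeding": 20}
ETHICAL_PENALTIES = {"ran_red_light": 10, "tailgating": 15,
                     "no_yield_to_pedestrian": 25}


def score_drive(events):
    """Return (legal_score, ethical_score), each starting at 100."""
    legal, ethical = 100, 100
    for event in events:
        legal -= LEGAL_PENALTIES.get(event, 0)
        ethical -= ETHICAL_PENALTIES.get(event, 0)
    return max(legal, 0), max(ethical, 0)


# A legally clean drive that is still ethically poor:
print(score_drive(["tailgating", "no_yield_to_pedestrian"]))  # (100, 60)
```

The key design choice is that neither score is derived from the other, so a perfect legal score cannot mask irresponsible behavior, which is precisely the compliant-but-vulnerable pattern Demianchuk describes in security.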
The technical implementation demonstrated strong domain knowledge — Ackermann steering physics, SUMO traffic model ports, and fourteen fully implemented lessons with real Motor Vehicle Administration citations. The visual design choice of literally darkening the world around a reckless driver showed creative game design thinking.
“But the project lacked a presentation,” Demianchuk notes. “In security assessments, documentation is not optional — it’s the artifact that allows third parties to verify your claims. A simulator this sophisticated deserves a demonstration that shows the pedagogical impact. Without it, evaluators are limited to reading code and inferring intent.”
What Standards Thinking Teaches About Civic Technology
Across his evaluations, Demianchuk identified a consistent theme: civic tech development operates without the security frameworks that enterprise and commercial software development have spent decades building. And that gap matters more, not less, because civic tech serves the users who are most vulnerable to security failures.
“Enterprise software has compliance frameworks — SOC 2, ISO 27001, NIST CSF, FedRAMP — that force development teams to consider security from the architecture phase,” he explains. “Civic tech has none of that. There’s no certification body for refugee assistance platforms. No compliance standard for children’s mental health apps built by three developers in 72 hours. The absence of those frameworks doesn’t mean the security requirements are lower — it means the development teams are entirely responsible for setting their own standards.”
This observation connects directly to Demianchuk’s work on the OASIS committees. “The CSAF standard exists because the industry recognized that vulnerability advisory information needed a common format to be actionable across organizations. Civic tech needs something analogous — not necessarily a formal standard, but a shared understanding of minimum security requirements for applications serving vulnerable populations. What data do you encrypt? What access controls do you implement? How do you handle credential management? How do you communicate data practices to non-technical users?”
“The teams at sudo make world demonstrated that the engineering talent exists to build impactful civic tools,” Demianchuk concludes. “What’s missing is the security scaffolding that ensures those tools don’t create new risks for the people they’re designed to protect. That scaffolding doesn’t have to come from a standards body — it can start with a checklist, a peer-review process, or a shared library of security patterns for civic applications. But it has to come from somewhere. Because the users of civic tech — refugees, children, disaster victims, underserved communities — are precisely the people who can least afford the consequences of a security failure.”
sudo make world 2026 was organized by Hackathon Raptors, a Community Interest Company (CIC #15557917) supporting innovation in software development. The event brought together 26 teams over 72 hours to build open-source tools for social good, evaluated by a panel of 38 judges across five weighted criteria: Impact & Vision (35%), Technical Execution (25%), Innovation (20%), Usability (15%), and Presentation (5%). Sergii Demianchuk served as a senior judge evaluating projects in the competition’s third evaluation batch.