Strategy: The Systems Thinker's Approach
Initial Discovery and Strategic Assessment
On June 7, 2025, one week before the DoD Cyber Sentinel Challenge, I opened a PDF that had been sitting in my inbox. As someone who had never participated in a CTF competition before, I wasn’t initially interested – until I saw one game-changing detail: participants were allowed to use AI assistance. This single rule change transformed the competition from a technical skills test into a systems architecture challenge.
The competition parameters were straightforward: 29 challenges across five cybersecurity categories, eight hours to solve them, and $15,000 in prize money with $5,000 for first place. But what caught my attention wasn’t the prizes or the prestige. It was the realization that the organizers had fundamentally changed the game by allowing AI, and most participants probably wouldn’t understand the implications.
Identifying the Real Challenge
My situation presented clear constraints: I couldn’t code, my computer had only 8GB of RAM, and I had one week to prepare while juggling a job search and two fast-paced courses for my AAS degree. Traditional wisdom would suggest I was unqualified to compete against DoD personnel and experienced hackers.
But I process challenges differently. Where others see sequential steps to learn, I see systems to architect. The question wasn’t “How do I learn to code in a week?” It was “How do I build a system that eliminates the need for coding?”
The Persistent Memory Solution
The first major obstacle emerged immediately: I had decided to use Claude Code as my primary tool for the competition, but this AI assistant has no memory between sessions. Every conversation starts fresh, meaning complex problem-solving becomes impossible across the eight-hour competition. Most people would accept this limitation. I saw it as a design constraint that needed a systematic solution.
The solution came from recognizing that while Claude couldn’t remember, I could create an external memory system that Claude could read. This led to the development of a comprehensive file-based memory architecture:
The CTF Command Center Structure
I created a competition headquarters at /home/chima/Music/Banger/DoD_Cyber_Sentinel/ with carefully designed subdirectories:
- 00_Competition_Intel/: Contains the competition rules, challenge tracking, and time strategy
- 01_Active_Challenges/: Current work with separate folders for multi-instance coordination
- 02_Claude_Memory/: The persistent brain – session states, discovered patterns, successful techniques
- 03_Completed/: Archived solutions for pattern analysis
- 04_Tools_Scripts/: Automation scripts and command references
- 05_Competition_Apps/: Tool organization by category
Each folder serves a specific purpose in the larger system. The beauty lies not in the folders themselves, but in how they create a living, breathing operational framework that maintains context across sessions and crashes.
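To make the bootstrap concrete, here is a minimal Python sketch of how such a command center could be created. The folder and file names come from the structure above; the script itself is my illustration, not the original setup code.

```python
# Minimal sketch (my illustration, not the verbatim setup script): builds
# the command center tree and seeds the memory files Claude reads at the
# start of every session.
from pathlib import Path

BASE = Path("/home/chima/Music/Banger/DoD_Cyber_Sentinel")

SUBDIRS = [
    "00_Competition_Intel",
    "01_Active_Challenges",
    "02_Claude_Memory",
    "03_Completed",
    "04_Tools_Scripts",
    "05_Competition_Apps",
]

def build_command_center() -> None:
    """Create the folder skeleton and empty memory files if missing."""
    for name in SUBDIRS:
        (BASE / name).mkdir(parents=True, exist_ok=True)
    # Seed the persistent-memory files; names match those referenced later.
    (BASE / "02_Claude_Memory" / "session_state.txt").touch(exist_ok=True)
    (BASE / "00_Competition_Intel" / "challenge_tracker.txt").touch(exist_ok=True)

if __name__ == "__main__":
    build_command_center()
```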
Multi-Instance Architecture: Turning Limitation into Advantage
My 8GB RAM limitation could have been crippling. Instead, it forced an innovation: multi-instance coordination. Rather than trying to run everything on one powerful system, I designed for parallel processing across multiple lightweight instances.
The breakthrough came from realizing I could run 2-3 Claude terminals simultaneously, each working on different challenges without conflicts. This isn’t just multitasking – it’s true parallel processing with intelligent coordination. While Instance 1 analyzes forensics challenges, Instance 2 performs web reconnaissance, and Instance 3 cracks passwords. The instances don’t need to communicate because the file system serves as shared memory.
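One way to picture the file-system-as-shared-memory idea is a claim protocol: before an instance starts a challenge, it atomically creates a marker file, so no two terminals duplicate work. This is a hypothetical sketch of the mechanism, not the actual coordination code; the lock-file naming is my assumption.

```python
# Hypothetical illustration of file-system coordination between instances:
# each instance atomically "claims" a challenge by creating a lock file,
# so two Claude terminals never work the same problem.
import os
from pathlib import Path

ACTIVE = Path("/home/chima/Music/Banger/DoD_Cyber_Sentinel/01_Active_Challenges")

def claim(challenge: str, instance: str) -> bool:
    """Return True if this instance won the claim; False if already taken."""
    lock = ACTIVE / f"{challenge}.lock"
    try:
        # O_CREAT | O_EXCL makes creation atomic: exactly one caller succeeds.
        fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    with os.fdopen(fd, "w") as f:
        f.write(instance)  # record who owns this challenge
    return True

if claim("forensics_01", "instance_1"):
    print("instance_1 owns forensics_01")
```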
Strategic Time Management
Eight hours, 29 challenges. That’s roughly 16 minutes per challenge with no breaks. But linear time allocation assumes all challenges are equal, which they’re not. I developed a five-phase competition strategy:
- Phase 1 (90 minutes): Sweep all Very Easy challenges for quick points
- Phase 2 (120 minutes): Target Easy challenges in comfort categories
- Phase 3 (150 minutes): Medium challenges with best point/time ratios
- Phase 4 (90 minutes): Strategic Hard challenge attempts
- Phase 5 (30 minutes): Final sprint for any remaining points
This isn’t just a schedule – it’s a point maximization algorithm disguised as time management.
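Under the hood, the phase plan is a greedy ratio sort: sweep whatever yields the most points per minute first. The sketch below uses invented challenge names and values purely to show the ranking logic, not real competition data.

```python
# Toy sketch of the scheduling idea: rank challenges by points per minute
# and attack the best ratios first. All numbers here are placeholders.
challenges = [
    {"name": "web_easy", "points": 100, "est_minutes": 10},
    {"name": "forensics_med", "points": 300, "est_minutes": 45},
    {"name": "crypto_hard", "points": 500, "est_minutes": 120},
]

def priority_order(items):
    """Highest points-per-minute first: the greedy core of the schedule."""
    return sorted(items, key=lambda c: c["points"] / c["est_minutes"], reverse=True)

for c in priority_order(challenges):
    print(f'{c["name"]}: {c["points"] / c["est_minutes"]:.1f} pts/min')
```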
Analyzing the Competition Landscape
The interesting question wasn’t whether I could compete, but who I was really competing against. Experienced cybersecurity professionals with advanced technical skills? They’re formidable but predictable. They’ll approach challenges sequentially, relying on deep expertise in specific areas.
The real threat comes from other systems thinkers – people who, like me, see the competition not as 29 individual challenges but as a single system to optimize. These competitors would inevitably arrive at similar solutions: persistent memory, parallel processing, AI integration. Not because they copied my approach, but because systematic analysis reveals optimal solutions.
This realization shifted my strategy. If other systems thinkers exist in the competition, completing 25-27 challenges might not suffice. I’d need all 29. The competition transforms from efficiency optimization to completeness enforcement.
The Philosophical Framework
There’s a fundamental difference between sequential and systematic processing that most people never articulate. A math genius processes equations in sequence, following logical steps from premise to conclusion. A systems thinker sees the entire problem space simultaneously, with all relationships and dependencies existing as a complete model.
This explains why building this entire CTF system felt effortless – like watching a movie rather than doing work. The system already existed in my mind; I was merely transcribing it into reality. It’s also why I’m confident other outlier systems thinkers would arrive at similar solutions. When you can see the entire system, optimal configurations become self-evident.
Tools and Automation Strategy
Rather than learning to use every tool manually, I created automation layers:
- forensics_initial_scan.py: Analyzes any file with multiple tools automatically
- quick_start.sh: Launches the competition environment with one command
- Pattern matching scripts: Identify challenge types and suggest approaches
The goal wasn’t to eliminate human judgment but to eliminate human execution of routine tasks. I orchestrate; the system executes.
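For a sense of what that first script might look like, here is a hedged sketch of a forensics_initial_scan.py-style pass. The specific tool list (file, strings, exiftool, binwalk) is my assumption of a typical forensics baseline, not the contents of the real script.

```python
# Sketch of an automated first-pass scan: run each baseline tool against
# the target file and print its output. Tool selection is assumed.
import subprocess
import sys

TOOLS = [
    ["file"],                # identify the file type
    ["strings", "-n", "8"],  # pull printable strings of 8+ characters
    ["exiftool"],            # metadata, if present
    ["binwalk"],             # embedded files / firmware signatures
]

def initial_scan(target: str) -> None:
    """Run every baseline tool against the target, skipping missing ones."""
    for tool in TOOLS:
        cmd = tool + [target]
        print(f"\n=== {' '.join(cmd)} ===")
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
            print(result.stdout or result.stderr)
        except FileNotFoundError:
            print(f"{tool[0]} not installed; skipping")
        except subprocess.TimeoutExpired:
            print(f"{tool[0]} timed out; moving on")

if __name__ == "__main__":
    initial_scan(sys.argv[1])
```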
Natural Language Integration
Perhaps the most powerful innovation was creating natural language interfaces for competition control. Instead of remembering complex commands, I built conversational triggers:
- “status” → Complete overview of all active instances
- “continue CTF work” → Resume with full context
- “work on [challenge]” → Begin focused analysis
This transforms the interaction from command-line complexity to strategic conversation.
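The triggers themselves lived as plain-language instructions in the memory files that Claude reads each session. Purely as an illustration of the mapping they encode, here is a toy dispatcher; the function bodies are placeholders, not real behavior.

```python
# Toy sketch of the phrase-to-action mapping behind the conversational
# triggers. The handlers here are placeholders for illustration only.
from typing import Callable

def show_status() -> str:
    return "Instance 1: forensics | Instance 2: web recon | Instance 3: cracking"

def resume_work() -> str:
    return "Reloading context from 02_Claude_Memory/session_state.txt ..."

TRIGGERS: dict[str, Callable[[], str]] = {
    "status": show_status,
    "continue CTF work": resume_work,
}

def handle(phrase: str) -> str:
    """Route a conversational phrase to its action; fall back gracefully."""
    action = TRIGGERS.get(phrase.strip())
    return action() if action else f"No trigger for: {phrase!r}"

print(handle("status"))
```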
Resource Planning and Mid-Competition Strategy
An interesting realization emerged as I thought through the competition dynamics. The moment my system successfully executes 2 to 3 tasks, I’ll essentially know what the final results will look like. The pattern recognition and systematic approach mean early performance directly predicts final outcomes. There’s no randomness in systematic execution – only consistent application of optimized processes.
With this predictive capability in mind, I factored in resource constraints. My current $100 Claude plan might not provide sufficient usage for eight hours of intensive multi-instance processing. I made a strategic decision: if the early tasks confirm the system is performing as designed, I’ll upgrade mid-competition to the $200 plan that offers 20X usage. This isn’t an expense – it’s an investment in guaranteed execution of all 29 challenges.
Competition Day Protocol
With all systems in place, competition day becomes execution rather than improvisation:
- Launch the command center with quick_start.sh
- Open three Claude instances for parallel processing
- Begin Phase 1 sweep of Very Easy challenges
- Track progress in real-time with challenge_tracker.txt
- Let the system handle crashes – resume from session_state.txt
- Focus on strategic decisions while AI handles technical execution
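The crash-handling step is where the persistent memory pays off. As a rough sketch of the resume logic: the file name comes from the protocol above, but the key/value format inside it is my assumption.

```python
# Sketch of the crash-recovery step: each instance reloads where it left
# off instead of starting from zero. The state-file format is assumed.
from pathlib import Path

STATE = Path("/home/chima/Music/Banger/DoD_Cyber_Sentinel/02_Claude_Memory/session_state.txt")

def load_state() -> dict[str, str]:
    """Parse 'key: value' lines into a dict; empty dict on first run."""
    if not STATE.exists():
        return {}
    state = {}
    for line in STATE.read_text().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            state[key.strip()] = value.strip()
    return state

def save_state(state: dict[str, str]) -> None:
    """Write the full state back so a crash loses at most one update."""
    STATE.write_text("\n".join(f"{k}: {v}" for k, v in state.items()))

state = load_state()
state["instance_1"] = "forensics_01, step 3 of scan"
save_state(state)
```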
Results and Insights
What started as preparation for a CTF competition became something larger: a demonstration that systems thinking can overcome traditional limitations. Without writing a single line of code, I built:
- A crash-resilient operational framework
- Parallel processing capabilities that triple effective output
- Persistent memory that maintains context across sessions
- Pattern recognition systems that improve throughout competition
- Natural language interfaces that eliminate syntax barriers
The Meta-Learning
This experience crystallized a fundamental insight: in an AI-enabled world, the ability to architect intelligent systems becomes a powerful complementary skill to traditional technical expertise. By building a system that maximizes human-AI collaboration, I’ve positioned myself not just to compete, but to demonstrate a new paradigm for technical problem-solving.
Whether I win first place matters less than what I’ve proven: with systems thinking and strategic AI integration, traditional barriers become design opportunities. The constraint of not coding became the catalyst for innovation. The limitation of 8GB RAM forced elegant efficiency. The one-week preparation timeline demanded systematic rather than sequential solutions.
This is how I approach every challenge – not as a problem to solve, but as a system to understand and optimize. The CTF competition simply provided a perfect demonstration ground for this philosophy in action.