I'm a backend and infrastructure engineer learning my way through cloud, security, and DevOps β with a growing interest in how AI fits into all of it.
This GitHub is where I try things out, experiment with ideas, and take notes on what I learn. Nothing here is final β just projects in progress, tools I'm playing with, and thoughts I'm writing down to get better at my craft. There are learning notes, and proof-of-concept tools that reflect my curiosity across systems, security, and AI infrastructure. Many of these are early-stage β some are complete, some are seeds for future research.
Lately, I've been exploring:
- Building small tools in Go and Rust
- Working with orchestration systems like Temporal
- Looking into security use-cases for AI
- Writing utilities to make things more observable, testable, and explainable
That's all. Just trying to stay curious and consistent.
I maintain a basic concept project repository at open-concept-lab across security, AI, and systems that aim to show:
- β My hands-on understanding of low-level mechanics
- π§ My ability to document, reason, and iterate publicly
- π My interest in bridging theory with practical tooling
Specifically, I wanted to more closely align myself with research over:
- Applied AI infrastructure and security
- Systems and tooling for research reproducibility
- Open-source DevSecOps and observability
- π Projects focused on DevSecOps, observability, and security automation
- π§ Tools that explore AI + system design tradeoffs, especially around reliability and compliance
- π§ͺ Research-inspired concepts like reproducible evaluation, secure AI deployment, and pipeline introspection
- π Thought experiments and problem-driven brainstorming β from protocol-level ideas to practical CLI tools
Contributing to platform engineering and AI infrastructure at Securiti.ai / Securiti @ Github, focused on enabling safe, intelligent use of data and AI across cloud environments.
My work spans:
- π§± Building scalable data-driven pipelines and access frameworks for cloud platforms
- βοΈ Improving orchestration and system design for identity, security, and data governance workflows
- π Supporting reliable infrastructure for secure data operations at scale
- Rubix982
π‘οΈ Tools and experiments focused on cybersecurity, DevSecOps, and data visibility.
This section contains working or semi-working tools related to packet analysis, vulnerability research, network insights, and cloud security. These projects are meant to explore real-world risks, automate tasks, and improve observability across systems.
Click to expand
SecChapter
β Long-term documentation of my journey in Cloud, Cybersecurity, and DevOps.StructDiff
β JSON structural diffing tool for easier inspection of data changes.ScrapChat
β Tool to organize ChatGPT outputs into readable markdown sections.ps
β A packet sniffer and network monitor built in Rust.argo-apps
β ArgoCD-based demos for distributed system orchestration.NetPulse
β Periodic internet speed monitor for local analysis.VulnData
β Future dataset project for vulnerability scraping and LLM-assisted security insight.CyberScope
β Security dataset analysis based on public Kaggle sources.
π§° A mix of utilities, demos, and small projects built to test ideas or learn something new.
This is where I try things out that donβt fit neatly into βsecurityβ or βinfraβ buckets β tooling experiments, UX ideas, or one-off playgrounds. Some are CLI tools, others are frontend visualizations or microservices.
Click to expand
Cyberflow
β Temporal + Go-based scanner for threat intel, enriched and cached locally.Triage
β Electron-based issue triage dashboard with D3 and DuckDB.Thoughts
β A CLI utility for fast personal note-taking.EsMappingTests
β Elasticsearch mapping experiments.SimpleMicroservice
β Basic microservice starter template.network_agent
β Local network statistics monitoring agent.http-showcase
β Demos of HTTP/1.1, HTTP/2, and HTTP/3 features.go-ssl
β Go project to inspect SSL/TLS issues.GoRoutinesAndConcurrency
β Go concurrency exploration.
β οΈ These are just raw, early-stage ideas β not finished projects.
This section is where I document security + AI + infra tools Iβd like to build (or see built). Most of these are speculative, based on problems Iβve encountered, read about, or imagined from industry trends.
Some may never get past a README. Others might turn into actual code someday. Either way, this is my public lab β a space to think out loud and connect dots.
Click to expand
Click to expand
FocusFeed is a personal, LLM-powered command center for daily knowledge and updates.
An MCP-style system that connects LLMs (like ChatGPT or Claude) to your key information feeds β so you wake up to a structured, summarized digest of everything that matters.
- Overwhelming inboxes and news feeds
- Time wasted identifying important content
- Passive reading habits
- Loss of context and connection between information sources
- π¬ Pulls from Gmail, GitHub, Hacker News, RSS, Reddit, and arXiv
- π§ GPT/Claude summarization and context-based commentary
- π Highlights key terms and vocabulary
- π Generates digest in Markdown, email, or TUI
- π Easily extensible with new tools and endpoints
- π οΈ 100% self-hosted / local by design β no vendor lock-in
Click to expand
PromptSnare detects adversarial prompt injection attempts in LLM systems and enforces safe prompt structures.
- Prompt manipulation degrading model behavior
- Injection attacks leaking private model data
- Loss of trust in enterprise AI interfaces
- π Scans for adversarial patterns using token inspection and prompt history
- π§± Enforces safe prompt templates
- π‘οΈ Compatible with OpenAI, local LLMs, and prompt chaining pipelines
Click to expand
InferGuard is a usage anomaly detector for LLM APIs that prevents stolen API token abuse and inference cost leaks.
- Unnoticed token theft and abuse
- Sudden billing spikes from inference load
- Lack of behavioral access monitoring for AI APIs
- π Tracks usage spikes and frequency patterns
β οΈ Raises alerts on behavioral shifts- π§© Hooks into billing dashboards and monitoring stacks
Click to expand
PoisonDetect identifies tampering, bias, and poisoning in ML training datasets.
- Silent model poisoning in open-source data
- Training on duplicated or biased samples
- Lack of confidence in fine-tuning sources
- π§ Clustering + anomaly detection on labels and samples
- π§Ή Noise filtering and scoring
- π Integration with dataset pre-processing workflows
Click to expand
AIComplianceBot checks AI pipelines against privacy and security compliance standards like GDPR, HIPAA, and ISO 27001.
- AI use in regulated industries (health, finance) without auditing
- Lack of paper trails for data access and processing
- Inability to show regulators that AI systems are compliant
- π§Ύ Scans data flow in AI APIs and pipelines
- π Flags PII exposure in prompts, logs, and models
- π Generates audit-ready reports
Click to expand
ModelDeployer brings GitOps-style deployment to ML models, ensuring consistency across dev/stage/prod.
- Drift between model versions in different environments
- Manual copy-pasting of weights and configs
- Accidental use of outdated or incorrect models
- ποΈ Hash-based versioning of weights and configs
- π Rollbacks and deploy histories
- βοΈ Works with HuggingFace, ONNX, PyTorch, etc.
Click to expand
LLMHealth offers real-time observability for LLM inference pipelines: latency, error rates, and cost insights.
- Inference slowdowns going undetected
- Silent memory leaks and performance regressions
- Difficulty debugging inference failures in prod
- π Prometheus/Grafana integration
- π OOM and latency spike alerts
- π§© Token-level profiling
Click to expand
AccessHawk tracks API token usage and behavior in LLM clusters to prevent insider threats and shadow access.
- Insider misuse of sensitive LLM features
- No visibility into who accessed what and when
- Long-lived, unused API tokens going unchecked
- π΅οΈ Role-based access maps
- π Heatmap of API call activity
β οΈ Alerting for outlier behavior
Click to expand
AirGapLLM is a self-hosted, air-gapped LLM deployment system with built-in access controls and observability.
- Regulatory restrictions on cloud AI use
- Need for full on-prem control and security
- Risk of leaking data through public APIs
- π Sandboxed GPU runners (Docker, Firecracker)
- π§Ύ Logs every API call with signed hashes
- πͺ Access throttling and prompt whitelisting
Click to expand
ExplainTrail creates a traceable prompt-response history with metadata to explain AI decisions.
- βBlack boxβ behavior in enterprise AI
- Legal/compliance challenges for explain-ability
- Lack of reproducibility in LLM-driven actions
- π Logs prompt, context, model, and response
- π Metadata linking and version stamping
- β Markdown or JSON-based explain-ability format
Click to expand
ModelAudit β Immutable logging + role-based audit trail for model access.
Tags - AI, Security
- Model access without accountability
- No logging = no blame if things go wrong
Click to expand
InferLoadBalancer β Smart batching and token-limit prediction for model serving
Tags - AI, Infrastructure
- Large model deployment eats too much memory
- Infra teams struggle with OOM crashes
Click to expand
LLMHealth β Prometheus/Grafana exporter for inference metrics.
Tags - AI, Infrastructure
- Model version mismatch across dev/stage/prod
- Unexpected behaviors, hard to debug
Click to expand
BatchLLM β Batching layer that groups inference calls based on size/priority.
Tags - AI, Infrastructure
- GPU resources are under-utilized
- Wasted compute, high infra costs
Click to expand
ModularServe β Declarative YAML config for multi-modal inference APIs.
Tags - AI, Infrastructure
- Multi-modal model chaos
- Text, image, audio all need different runtimes
Click to expand
SecretRadar β Scans K8s, Vault, and envs for unmanaged secrets.
Tags - Security, Infrastructure
- No visibility into what secrets exist in your cluster
- Secret sprawl = breach risk.
Click to expand
CIWatchdog - Sign and verify every CI artifact, from code to container.
Tags - Security, Infrastructure
- CI/CD pipelines are easily poisoned
- One bad push = widespread compromise
Click to expand
InfraMirror - Compares actual cloud state with IaC and highlights drifts
Tags - Security, Infrastructure
- IaC drift causes silent vulnerabilities
- Prod != Git = blind spots
Click to expand
GhostInfra - Builds a graph of cloud assets and flags ownerless nodes
Tags - Security, Infrastructure
- Shadow infra gets spun up and forgotten
- Unbilled/unaudited systems = easy attack targets
Click to expand
LLMOrchestrator - CLI or GUI to connect models, preprocessors, and filters like a DAG
Tags - Platform, Utility Tooling
- Difficult to orchestrate multiple models/tools
Click to expand
LLMSigner - Adds cryptographic signing to every prompt-response pair
Tags - Platform, Utility Tooling
- Need signed metadata for AI actions
Click to expand
SecureLLMTestKit - Dockerized replayable attack/test pipeline with logs
Tags - Platform, Utility Tooling
- Security researchers need reproducible testbeds
Click to expand
LLMInfraLite - Local GPU/CPU inference deployer + observability bundle
Tags - Platform, Utility Tooling
- Developers need local AI infra that just works
Click to expand
AISecGraph - Visual dependency + threat model of entire AI pipeline
Tags - Platform, Utility Tooling
- Hard to reason about AI system security posture
ποΈ Configs, notes, and personal setups that help me stay productive.
This section includes my Neovim setup, cheat sheets, reusable code snippets, and dev environment configs. Sharing them here in case theyβre helpful to others β and to keep my own reference centralized.
Click to expand
nvim
β My personal Neovim configuration.diary
β Personal learnings and re-usable knowledge notes.CodeToolBox
β Handy scripts and productivity utilities.LangLib
β Competitive programming language utility repo.kali-linux-ctf
β Vagrant + Kali setup for security challenges.LeetCode
β My solutions to Leetcode problems.