Skip to content
View Rubix982's full-sized avatar
πŸ‡΅πŸ‡°
πŸ‡΅πŸ‡°

Organizations

@NVIDIAGameWorks @Software-Development-Pakistan @dotnet-foundation @OSSpk @Developer-Student-Club-NUCES @karachi-katas @MLH-Fellowship @EddieHubCommunity @codeDroid1

Block or report Rubix982

Block user

Prevent this user from interacting with your repositories and sending you notifications. Learn more about blocking users.

You must be logged in to block users.

Please don't include any personal information such as legal names or email addresses. Maximum 100 characters, markdown supported. This note will be visible to only you.
Report abuse

Contact GitHub support about this user’s behavior. Learn more about reporting abuse.

Report abuse
Rubix982/README.md

Rubix982

About Me

I'm a backend and infrastructure engineer learning my way through cloud, security, and DevOps β€” with a growing interest in how AI fits into all of it.

This GitHub is where I try things out, experiment with ideas, and take notes on what I learn. Nothing here is final β€” just projects in progress, tools I'm playing with, and thoughts I'm writing down to get better at my craft. There are learning notes, and proof-of-concept tools that reflect my curiosity across systems, security, and AI infrastructure. Many of these are early-stage β€” some are complete, some are seeds for future research.

Lately, I've been exploring:

  • Building small tools in Go and Rust
  • Working with orchestration systems like Temporal
  • Looking into security use-cases for AI
  • Writing utilities to make things more observable, testable, and explainable

That's all. Just trying to stay curious and consistent.

Research Alignment

I maintain a basic concept project repository at open-concept-lab across security, AI, and systems that aim to show:

  • βœ… My hands-on understanding of low-level mechanics
  • 🧠 My ability to document, reason, and iterate publicly
  • πŸ“Ž My interest in bridging theory with practical tooling

Specifically, I wanted to more closely align myself with research over:

  • Applied AI infrastructure and security
  • Systems and tooling for research reproducibility
  • Open-source DevSecOps and observability

My Profile Is About ...

  • πŸ” Projects focused on DevSecOps, observability, and security automation
  • 🧠 Tools that explore AI + system design tradeoffs, especially around reliability and compliance
  • πŸ§ͺ Research-inspired concepts like reproducible evaluation, secure AI deployment, and pipeline introspection
  • πŸ“š Thought experiments and problem-driven brainstorming β€” from protocol-level ideas to practical CLI tools

Currently

Contributing to platform engineering and AI infrastructure at Securiti.ai / Securiti @ Github, focused on enabling safe, intelligent use of data and AI across cloud environments.

My work spans:

  • 🧱 Building scalable data-driven pipelines and access frameworks for cloud platforms
  • βš™οΈ Improving orchestration and system design for identity, security, and data governance workflows
  • πŸ” Supporting reliable infrastructure for secure data operations at scale

Table of Contents


Security Projects

πŸ›‘οΈ Tools and experiments focused on cybersecurity, DevSecOps, and data visibility.

This section contains working or semi-working tools related to packet analysis, vulnerability research, network insights, and cloud security. These projects are meant to explore real-world risks, automate tasks, and improve observability across systems.

Click to expand
  1. SecChapter β€” Long-term documentation of my journey in Cloud, Cybersecurity, and DevOps.
  2. StructDiff β€” JSON structural diffing tool for easier inspection of data changes.
  3. ScrapChat β€” Tool to organize ChatGPT outputs into readable markdown sections.
  4. ps β€” A packet sniffer and network monitor built in Rust.
  5. argo-apps β€” ArgoCD-based demos for distributed system orchestration.
  6. NetPulse β€” Periodic internet speed monitor for local analysis.
  7. VulnData β€” Future dataset project for vulnerability scraping and LLM-assisted security insight.
  8. CyberScope β€” Security dataset analysis based on public Kaggle sources.

Ideas & Misc Tools

🧰 A mix of utilities, demos, and small projects built to test ideas or learn something new.

This is where I try things out that don’t fit neatly into β€œsecurity” or β€œinfra” buckets β€” tooling experiments, UX ideas, or one-off playgrounds. Some are CLI tools, others are frontend visualizations or microservices.

Click to expand
  1. Cyberflow β€” Temporal + Go-based scanner for threat intel, enriched and cached locally.
  2. Triage β€” Electron-based issue triage dashboard with D3 and DuckDB.
  3. Thoughts β€” A CLI utility for fast personal note-taking.
  4. EsMappingTests β€” Elasticsearch mapping experiments.
  5. SimpleMicroservice β€” Basic microservice starter template.
  6. network_agent β€” Local network statistics monitoring agent.
  7. http-showcase β€” Demos of HTTP/1.1, HTTP/2, and HTTP/3 features.
  8. go-ssl β€” Go project to inspect SSL/TLS issues.
  9. GoRoutinesAndConcurrency β€” Go concurrency exploration.

Brainstorming Only

⚠️ These are just raw, early-stage ideas β€” not finished projects.

This section is where I document security + AI + infra tools I’d like to build (or see built). Most of these are speculative, based on problems I’ve encountered, read about, or imagined from industry trends.

Some may never get past a README. Others might turn into actual code someday. Either way, this is my public lab β€” a space to think out loud and connect dots.

Click to expand

FocusFeed

Click to expand

FocusFeed is a personal, LLM-powered command center for daily knowledge and updates.

An MCP-style system that connects LLMs (like ChatGPT or Claude) to your key information feeds β€” so you wake up to a structured, summarized digest of everything that matters.

Problems It Solves

  • Overwhelming inboxes and news feeds
  • Time wasted identifying important content
  • Passive reading habits
  • Loss of context and connection between information sources

Key Features

  • πŸ“¬ Pulls from Gmail, GitHub, Hacker News, RSS, Reddit, and arXiv
  • 🧠 GPT/Claude summarization and context-based commentary
  • πŸ“š Highlights key terms and vocabulary
  • πŸ“† Generates digest in Markdown, email, or TUI
  • πŸ”Œ Easily extensible with new tools and endpoints
  • πŸ› οΈ 100% self-hosted / local by design β€” no vendor lock-in

PromptSnare

Click to expand

PromptSnare detects adversarial prompt injection attempts in LLM systems and enforces safe prompt structures.

Problems It Solves

  • Prompt manipulation degrading model behavior
  • Injection attacks leaking private model data
  • Loss of trust in enterprise AI interfaces

Key Features

  • πŸ” Scans for adversarial patterns using token inspection and prompt history
  • 🧱 Enforces safe prompt templates
  • πŸ›‘οΈ Compatible with OpenAI, local LLMs, and prompt chaining pipelines

InferGuard

Click to expand

InferGuard is a usage anomaly detector for LLM APIs that prevents stolen API token abuse and inference cost leaks.

Problems It Solves

  • Unnoticed token theft and abuse
  • Sudden billing spikes from inference load
  • Lack of behavioral access monitoring for AI APIs

Key Features

  • πŸ“ˆ Tracks usage spikes and frequency patterns
  • ⚠️ Raises alerts on behavioral shifts
  • 🧩 Hooks into billing dashboards and monitoring stacks

PoisonDetect

Click to expand

PoisonDetect identifies tampering, bias, and poisoning in ML training datasets.

Problems It Solves

  • Silent model poisoning in open-source data
  • Training on duplicated or biased samples
  • Lack of confidence in fine-tuning sources

Key Features

  • 🧠 Clustering + anomaly detection on labels and samples
  • 🧹 Noise filtering and scoring
  • πŸ“Š Integration with dataset pre-processing workflows

AIComplianceBot

Click to expand

AIComplianceBot checks AI pipelines against privacy and security compliance standards like GDPR, HIPAA, and ISO 27001.

Problems It Solves

  • AI use in regulated industries (health, finance) without auditing
  • Lack of paper trails for data access and processing
  • Inability to show regulators that AI systems are compliant

Key Features

  • 🧾 Scans data flow in AI APIs and pipelines
  • πŸ” Flags PII exposure in prompts, logs, and models
  • πŸ“‹ Generates audit-ready reports

ModelDeployer

Click to expand

ModelDeployer brings GitOps-style deployment to ML models, ensuring consistency across dev/stage/prod.

Problems It Solves

  • Drift between model versions in different environments
  • Manual copy-pasting of weights and configs
  • Accidental use of outdated or incorrect models

Key Features

  • πŸ—ƒοΈ Hash-based versioning of weights and configs
  • πŸ” Rollbacks and deploy histories
  • βš™οΈ Works with HuggingFace, ONNX, PyTorch, etc.

LLMHealth

Click to expand

LLMHealth offers real-time observability for LLM inference pipelines: latency, error rates, and cost insights.

Problems It Solves

  • Inference slowdowns going undetected
  • Silent memory leaks and performance regressions
  • Difficulty debugging inference failures in prod

Key Features

  • πŸ“ˆ Prometheus/Grafana integration
  • πŸ›‘ OOM and latency spike alerts
  • 🧩 Token-level profiling

AccessHawk

Click to expand

AccessHawk tracks API token usage and behavior in LLM clusters to prevent insider threats and shadow access.

Problems It Solves

  • Insider misuse of sensitive LLM features
  • No visibility into who accessed what and when
  • Long-lived, unused API tokens going unchecked

Key Features

  • πŸ•΅οΈ Role-based access maps
  • πŸ“Š Heatmap of API call activity
  • ⚠️ Alerting for outlier behavior

AirGapLLM

Click to expand

AirGapLLM is a self-hosted, air-gapped LLM deployment system with built-in access controls and observability.

Problems It Solves

  • Regulatory restrictions on cloud AI use
  • Need for full on-prem control and security
  • Risk of leaking data through public APIs

Key Features

  • πŸ” Sandboxed GPU runners (Docker, Firecracker)
  • 🧾 Logs every API call with signed hashes
  • πŸšͺ Access throttling and prompt whitelisting

ExplainTrail

Click to expand

ExplainTrail creates a traceable prompt-response history with metadata to explain AI decisions.

Problems It Solves

  • β€œBlack box” behavior in enterprise AI
  • Legal/compliance challenges for explain-ability
  • Lack of reproducibility in LLM-driven actions

Key Features

  • πŸ“š Logs prompt, context, model, and response
  • πŸ”— Metadata linking and version stamping
  • βœ… Markdown or JSON-based explain-ability format

ModelAudit

Click to expand

ModelAudit β€” Immutable logging + role-based audit trail for model access.

Tags - AI, Security

Problems It Solves

  • Model access without accountability

Why It Hurts

  • No logging = no blame if things go wrong

InferLoadBalancer

Click to expand

InferLoadBalancer β€” Smart batching and token-limit prediction for model serving

Tags - AI, Infrastructure

Problems It Solves

  • Large model deployment eats too much memory

Why It Hurts

  • Infra teams struggle with OOM crashes

LLMTripwire

Click to expand

LLMHealth β€” Prometheus/Grafana exporter for inference metrics.

Tags - AI, Infrastructure

Problems It Solves

  • Model version mismatch across dev/stage/prod

Why It Hurts

  • Unexpected behaviors, hard to debug

BatchLLM

Click to expand

BatchLLM β€” Batching layer that groups inference calls based on size/priority.

Tags - AI, Infrastructure

Problems It Solves

  • GPU resources are under-utilized

Why It Hurts

  • Wasted compute, high infra costs

ModularServe

Click to expand

ModularServe β€” Declarative YAML config for multi-modal inference APIs.

Tags - AI, Infrastructure

Problems It Solves

  • Multi-modal model chaos

Why It Hurts

  • Text, image, audio all need different runtimes

SecretRadar

Click to expand

SecretRadar β€” Scans K8s, Vault, and envs for unmanaged secrets.

Tags - Security, Infrastructure

Problems It Solves

  • No visibility into what secrets exist in your cluster

Why It Hurts

  • Secret sprawl = breach risk.

CIWatchdog

Click to expand

CIWatchdog - Sign and verify every CI artifact, from code to container.

Tags - Security, Infrastructure

Problems It Solves

  • CI/CD pipelines are easily poisoned

Why It Hurts

  • One bad push = widespread compromise

InfraMirror

Click to expand

InfraMirror - Compares actual cloud state with IaC and highlights drifts

Tags - Security, Infrastructure

Problems It Solves

  • IaC drift causes silent vulnerabilities

Why It Hurts

  • Prod != Git = blind spots

GhostInfra

Click to expand

GhostInfra - Builds a graph of cloud assets and flags ownerless nodes

Tags - Security, Infrastructure

Problems It Solves

  • Shadow infra gets spun up and forgotten

Why It Hurts

  • Unbilled/unaudited systems = easy attack targets

LLMOrchestrator

Click to expand

LLMOrchestrator - CLI or GUI to connect models, preprocessors, and filters like a DAG

Tags - Platform, Utility Tooling

Problems It Solves

  • Difficult to orchestrate multiple models/tools

LLMSigner

Click to expand

LLMSigner - Adds cryptographic signing to every prompt-response pair

Tags - Platform, Utility Tooling

Problems It Solves

  • Need signed metadata for AI actions

SecureLLMTestKit

Click to expand

SecureLLMTestKit - Dockerized replayable attack/test pipeline with logs

Tags - Platform, Utility Tooling

Problems It Solves

  • Security researchers need reproducible testbeds

LLMInfraLite

Click to expand

LLMInfraLite - Local GPU/CPU inference deployer + observability bundle

Tags - Platform, Utility Tooling

Problems It Solves

  • Developers need local AI infra that just works

AISecGraph

Click to expand

AISecGraph - Visual dependency + threat model of entire AI pipeline

Tags - Platform, Utility Tooling

Problems It Solves

  • Hard to reason about AI system security posture

Personal & Configs

πŸ—’οΈ Configs, notes, and personal setups that help me stay productive.

This section includes my Neovim setup, cheat sheets, reusable code snippets, and dev environment configs. Sharing them here in case they’re helpful to others β€” and to keep my own reference centralized.

Click to expand
  1. nvim β€” My personal Neovim configuration.
  2. diary β€” Personal learnings and re-usable knowledge notes.
  3. CodeToolBox β€” Handy scripts and productivity utilities.
  4. LangLib β€” Competitive programming language utility repo.
  5. kali-linux-ctf β€” Vagrant + Kali setup for security challenges.
  6. LeetCode β€” My solutions to Leetcode problems.

Pinned Loading

  1. baioc/crowd-sourced baioc/crowd-sourced Public archive

    Open source "ML As A Service": high quality labels to train your models!

    SCSS 3

  2. mapillary/mapillary-python-sdk mapillary/mapillary-python-sdk Public

    A Python 3 library built on the Mapillary API v4 to facilitate retrieving and working with Mapillary data.

    Python 49 21