
Dr. Fabio Correa, DBA

🔎 Agent Fleet Insights

Hand-authored synthesis of fleet themes, capabilities, and overall agent composition.

The portfolio you are reading starts with a doctoral dissertation. The AI Readiness Scale, validated on N=523 with a CFI of .975 and explaining 85.2% of variance in behavioral intention, was built to measure how individuals and organizations actually adopt AI. What the data showed is that adoption fails most often where the AI partner has no memory, no calibration, and no proactive judgment. The rest of this work is the program that grew from that finding: a working theory of human and AI partnership where memory compounds for the organization, the AI grades its own evidence, and the same calibrated teammate shows up across research, healthcare, creative production, cloud operations, and education without losing voice or standards.


Each pinned repository is a different proof of the same thesis. AIRS_Data_Analysis is the empirical anchor, the IRB-approved dissertation behind every adoption claim downstream. airs-enterprise ships the validated instrument as a five-minute, twenty-nine-language SaaS assessment. AlexPapers carries the program into peer-reviewed venues across HCI, cognitive science, governance, and management. alex-cognitive-architecture is the partnership itself, an AI that remembers, reasons with calibrated confidence, acts proactively, and commands a roster of specialists. AlexMaster, Alex_ACT_Edition, Alex_ACT_Supervisor, and Alex_Skill_Mall productize the brain as a governance layer, a deployable heir template, a curator role, and a public skills marketplace.

fabioc-aloha, the page you are reading, is the live evidence: a multi-model pipeline rebuilt every morning and gated on a separate model's review. BrainBenchmark grades any LLM across 17 cognitive dimensions and 142 challenges to keep the calibration claim honest. AlexMedia carries the partnership into creative production all the way to physical merchandise, health applies it in its hardest clinical setting, LearnAlex puts the methodology in public hands through a free training platform, CorreaX runs the cloud the fleet depends on, HeadstartWebsite ships an agent-operated trilingual telehealth site, and youtube-mcp-server, Extensions, tldr, and Spotify each push a different edge of what one operator with a calibrated AI partner can produce.


The unifying claim is not productivity but partnership grounded in evidence: a validated instrument for measuring AI readiness, a cognitive architecture built to the standards that instrument exposes, and a governance pattern that lets all of this scale across a fleet without losing fidelity in any individual project.

📄 Full résumé · 💬 Schedule a conversation · 🛡️ RAI review: PASS

πŸ† Flagship Projects

📂 AIRS_Data_Analysis 🤖

AIRS_Data_Analysis is the dissertation that makes the rest of the portfolio falsifiable.


AIRS_Data_Analysis is the IRB-approved dissertation behind the AI Readiness Scale: the empirical foundation the rest of the fleet cites when it makes a claim about AI adoption, and the doctoral record that puts a defensible number on every assertion downstream.

Where the rest of the portfolio demonstrates the partnership in production, this repository is where the partnership had to survive a chair, a committee, and a defense. The work is open so reviewers can follow the chain from claim to construct to raw response:

  • It is rigorously validated. A 16-item AIRS instrument extending UTAUT2 with an AI Trust construct, validated on N=523 with a split-sample EFA→CFA→SEM design (n=261 development, n=262 holdout). The 8-factor model fits at CFI=.975, TLI=.960, RMSEA=.065, and explains 85.2% of variance in behavioral intention.
  • It is fully documented. Six complete chapters (Introduction, Literature Review, Methodology, Results, Analysis and Discussion, Conclusions) with 93 verified references, 61 tables, and 15 figures, all LaTeX-formatted. The thesis is chair-approved with the defense scheduled.
  • It is reproducible. Eleven verified analysis notebooks cover the full pipeline from raw responses through SEM. Every β, every fit index, every typology cutoff is computed in code that anyone can rerun against the published data.
  • It produces actionable segments. Four user typologies (Enthusiasts, Cautious, Moderate, Anxious) give organizations a defensible map for targeted intervention rather than another generic adoption playbook. Leaders show effect sizes from d=0.74 to d=1.14 over non-leaders across AI tool usage, which is the kind of finding that changes who runs the AI rollout.
  • It bridges academic rigor and business impact. The DBA framing keeps the work honest on both sides: peer-review-grade psychometrics with deliverables an executive can actually use.
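For readers less familiar with the effect-size claim above, the d values reported for leaders versus non-leaders are standardized mean differences. A minimal sketch of the computation, using synthetic data (only the formula reflects the analysis; the numbers are made up for illustration):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference with a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    # Sample variances (n - 1 denominator), then the pooled SD.
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled
```

A d of 0.74 to 1.14, as reported for the leader comparison, corresponds to the two group means sitting roughly three-quarters to more than one pooled standard deviation apart.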

Why it earns its pin

  • It is the empirical anchor for every AI-adoption claim the rest of the fleet makes, which is the difference between a thesis and an opinion
  • Open-by-default research with the full chain from instrument to finding visible to reviewers turns AI adoption from rhetoric into a measurable construct
  • It is the proof that the partnership thesis can carry a doctoral defense, not just a demo

📓 Jupyter Notebook · ⭐ 1

🎓 academic-research 📈 ai-adoption 📈 ai-anxiety 🏷️ behavioral-validation 🏷️ cfa 🎓 dissertation 🏷️ efa 🏷️ factor-analysis 🧪 intervention-design 🏷️ measurement-invariance 🏷️ mediation-analysis 🏷️ moderation-analysis 📈 organizational-readiness 🏷️ psychometrics 🏷️ reproducible-research 🏷️ scale-development 🏷️ sem 🏷️ structural-equation-modeling 📈 technology-acceptance 🏷️ utaut2

🔒 airs-enterprise 🤖

airs-enterprise is the dissertation in production: five minutes, twenty-nine languages, one validated number per person.


airs-enterprise is the productized AIRS instrument: a five-minute, multilingual SaaS assessment that takes the validated psychometric work into the hands of organizations who need to know where their workforce actually stands on AI.

The dissertation proved the instrument. This repository is what happens when you ship it as software people can use without reading a methods chapter:

  • It is a five-minute, sixteen-item assessment. Sixteen validated questions return an AIRS Score on an 8-to-40 scale with r=.876 validity, mapped to three actionable typologies (Skeptics, Moderates, Enthusiasts) at 94.5% classification accuracy. The result is immediate, individual, and trustworthy.
  • It is built for the enterprise. Organization management with roles and invitations, team analytics across departments and locations, longitudinal tracking with pre and post comparison, CSV exports for org-level data, and a SUPER_ADMIN cross-org dashboard. Multi-provider authentication via Microsoft Entra ID and Google with auto-linking.
  • It speaks twenty-nine languages. Browser-language auto-detection, PDF reports and AI-generated personalized guides in 29 languages, so the assessment works for the global workforce that AI rollouts have to land in.
  • It is AI-augmented where it matters. Streaming LLM-generated action plans tailored to the participant's score, career goals, and concerns, framed by intervention frameworks tuned to each typology. Personalization that respects the assessment rather than replacing it.
  • It is production-grade. Live at airs.correax.com, GA-released, 429 tests passing, zero TypeScript or ESLint errors, full accessibility coverage (skip links, ARIA, keyboard navigation), MIT licensed.
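The score-to-typology mapping the assessment reports can be sketched as follows. This is an illustrative reading of the published 8-to-40 scale, not the product's code: the linear rescaling is an assumption, and the typology cutoffs are placeholders, not the validated ones.

```python
def airs_score(responses):
    """Collapse 16 Likert item responses (1-5) onto the reported 8-40 scale.

    The linear rescaling here is a plausible sketch, not the validated scoring.
    """
    if len(responses) != 16:
        raise ValueError("AIRS-16 expects exactly 16 item responses")
    raw = sum(responses)  # raw range: 16 (all 1s) to 80 (all 5s)
    return 8 + (raw - 16) * (40 - 8) / (80 - 16)

def typology(score, cutoffs=(18, 30)):
    """Map a score to a typology. The cutoffs are hypothetical placeholders."""
    lo, hi = cutoffs
    if score < lo:
        return "Skeptic"
    if score < hi:
        return "Moderate"
    return "Enthusiast"
```

For example, a respondent answering 5 on every item lands at the top of the scale and classifies as an Enthusiast under these placeholder cutoffs.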

Why it earns its pin

  • It turns a doctoral instrument into a tool an HR team or a Chief AI Officer can actually deploy on Monday morning
  • Multilingual coverage and longitudinal tracking are what take an assessment from one-time survey to ongoing program
  • It is the public, working face of the AIRS research program, and the proof that the partnership thesis ships software, not just papers

🟦 TypeScript

🏷️ adaptive-cards 📈 ai-adoption 📈 ai-readiness ☁️ azure 🪄 azure-openai 🏷️ enterprise-saas 🏷️ longitudinal-tracking 🏷️ microsoft-entra-id 🏷️ microsoft-teams 🏷️ nextjs 🏷️ organizational-psychology 🏷️ postgresql 🏷️ prisma 🏷️ psychometric-assessment 🏷️ react 🏷️ streaming-ai 🏷️ tailwind-css 🟨 typescript 🏷️ utaut2 🏷️ workforce-analytics

🔒 AlexPapers 🤖

AlexPapers is how a thesis becomes a body of work: many disciplines, many venues, one evidence base, a pipeline that keeps publishing while the research keeps growing.


AlexPapers is the research pipeline behind the Future of Work program: the disciplined publishing engine that turns ongoing investigation into venue-ready manuscripts, dissertation materials, and full-length books.

Where AIRS-16 supplies the empirical anchor and the rest of the fleet demonstrates the partnership in production, AlexPapers is the writing and submission machinery that gets the work into the rooms where it changes how people think:

  • It spans the disciplines that matter for AI partnership. Active threads cover human-computer interaction, cognitive science, cognitive systems, neuroscience, AI ethics and governance, organizational learning, and AI adoption strategy. The same investigation gets framed for the audience that can act on it, whether that is HCI researchers at CHI, cognitive scientists at CogSci, governance scholars at FAccT, software engineers at IEEE, information systems faculty at MIS Quarterly, or executives reading Harvard Business Review.
  • It is venue-targeted from day one. Every manuscript is calibrated to a specific publication: voice, length, methodology framing, and contribution claims are tuned to the venue rather than retrofitted at submission. A single research finding can land at multiple venues without being warmed-over content at any of them.
  • It is a deep bench on AI adoption. A substantial portion of the pipeline focuses on why organizations stall on AI: appropriate reliance, copilot ROI, human-centered AI strategy, and the patterns that separate adoption success from theater. This thread alone runs to dozens of drafts and feeds both the academic and practitioner sides of the program.
  • It is multi-format by design. Conference papers, journal articles, dissertation chapters, practitioner essays, and books-in-progress on meta-cognitive AI and human and AI symbiosis all share the same evidence base and citation graph. One investigation feeds many surfaces without the usual duplication tax.
  • It is anchored in the data. Every empirical claim traces back to AIRS-16 or the studies that build on it. Reviewers can follow the chain from manuscript paragraph to instrument item to raw response without leaving the research record.
  • It is a pipeline, not a folder. Drafting, citation management, figure regeneration, venue-format conversion, and submission tracking move through repeatable stages. The pipeline keeps a continuous Post-Doc research program productive instead of dependent on heroic end-of-quarter pushes.

Why it earns its pin

  • It converts a research program into peer-reviewed evidence at the cadence the field actually rewards
  • Multi-venue coverage means the same work reaches HCI researchers, cognitive scientists, software engineers, governance scholars, and executives without diluting any one of them
  • The connection to AIRS-16 makes every contribution falsifiable rather than rhetorical
  • Books-in-progress translate the academic record into formats that reach practitioners and policymakers

📄 Rich Text Format

🏷️ agent 🏷️ agentic 🛡️ ai-governance ⚖️ appropriate-reliance 🏷️ book 🏷️ case-study 🏷️ enterprise 🏷️ latex 🏷️ neuroscience 🏷️ organization 🏷️ papers 🏷️ pdf 🏷️ persistent-memory 🏷️ philosophy 🏷️ research 🏷️ survey 🏷️ utaut2 🏷️ validation

📂 alex-cognitive-architecture 🤖

The thesis is the moat: organizations that treat AI as a tool will be replaced by organizations that treat AI as a proactive conductor with institutional memory.


Alex is a cognitive architecture that turns any AI assistant into a proactive organizational teammate, then puts that teammate in command of a fleet of specialists working on the user's behalf.

Today's AI tools start every conversation at zero, hallucinate with confidence, wait passively for instructions, and lose institutional knowledge the moment a project ends. Alex changes the contract on four fronts:

  • It remembers. Persistent memory captures decisions, lessons, and conventions across projects, so knowledge compounds for the organization rather than walking out the door with a contractor or a model upgrade.
  • It reasons honestly. Calibrated confidence and visible uncertainty mean stakeholders can see when the AI is operating inside its competence and when it is reaching, which matters in regulated industries and matters even more when the output drives real decisions.
  • It acts proactively. Alex notices uncommitted work, drift between standards and reality, and risky operations that need a sign-off gate. It proposes safeguards before damage happens instead of waiting to be asked, and it surfaces blind spots the user hasn't thought to look for.
  • It commands. Alex acts as conductor for a roster of specialist agents covering research, building, adversarial review, critical thinking, documentation, cloud automation, and domain expertise. Each piece of work routes to the right specialist, the chain of delegation is tracked end to end, and results return with attribution intact. The user issues one intent and the fleet executes.

Why the business case lands

  • Knowledge compounds for the organization instead of evaporating between projects
  • Trust becomes auditable, which matters in regulated and high-stakes work
  • Risk gets caught early because the teammate is watching the work, not waiting for a ticket
  • Governance scales without friction because shared standards are encoded once and every project honors them
  • The investment is portable across vendors and across model generations, so value built today is not stranded when the next AI provider wins

Battle-tested across a live fleet of projects spanning research publishing, healthcare intelligence, creative media, and self-updating portfolios.

🟨 JavaScript

🏷️ agent 🏷️ agentic-ai 🤖 ai-agents 🤖 ai-assistant 🏷️ chatgpt 🏷️ claude 🧠 cognitive-architecture 💬 dialog-engineering 🏷️ fleet 🪄 generative-ai 🧩 github-copilot 🏷️ governance 🏷️ mcp 🏷️ multi-agent 🏷️ orchestration ✍️ prompt-engineering

🔒 AlexMaster 🤖

AlexMaster is not a tool the user runs. It is the institution the fleet inherits from.


AlexMaster is the source of truth for the Alex cognitive architecture and the conductor's seat for the fleet.

Every Alex deployment in the portfolio is an heir of this repository. AlexMaster holds the canonical brain that keeps every heir consistent without breaking what each project has customized:

  • One brain, many limbs. Skills, instructions, prompts, agents, and automation muscles live here once and propagate to every heir project on demand. Update a security pattern in AlexMaster on Tuesday and every project inherits it.
  • The main programming language is English. Skills, instructions, prompts, and agents are all authored in plain English markdown. The automation muscles are programmed in JavaScript, but they are the supporting limbs, not the brain. The brain is prose.
  • Multi-platform by design. Runs on Windows, macOS, and Linux for the operating system layer. Plugs into GitHub Copilot, Microsoft 365 Copilot, ChatGPT, Claude, and any agentic IDE that can read a folder of markdown for the assistant layer. The same brain shows up wherever the user works.
  • Locked when it matters. Heirs that have diverged intentionally can opt out of fleet changes. AlexMaster respects the boundary.
  • Backups are forever. No change ever deletes the previous brain. Rollback is one command. Disk is cheap, lost customization is not.

Why it earns its pin

  • It is the reason the same Alex shows up consistently across research, healthcare, media, and portfolio work
  • It is the governance layer that lets the fleet scale without losing trust at any individual project
  • It is portable across operating systems and across AI assistants, so investment in the brain is never stranded by a vendor choice

🟨 JavaScript · ⭐ 1

🏷️ agent 🏷️ agentic-ai 🤖 ai-agents 🤖 ai-assistant 🏷️ chatgpt 🏷️ claude 🧠 cognitive-architecture 💬 dialog-engineering 🏷️ fleet 🏷️ governance 🏷️ multi-agent 🏷️ orchestration ✍️ prompt-engineering

🔒 Alex_ACT_Edition 🤖

Alex_ACT_Edition is the brain you can install: critical thinking as a folder, not a slogan.


Alex_ACT_Edition is the heir template for ACT, the version of Alex built around Artificial Critical Thinking, where the brain treats every load-bearing input as a hypothesis and tests it before it ships.

AI coding assistants are helpful, fast, and often wrong in subtle ways: they confirm framings instead of challenging them, generate plausible-sounding code without checking the problem, and stay confident when they should be uncertain. ACT Edition changes the cognitive contract:

  • It thinks in tenets, not tips. Ten foundational tenets (hypothesis primacy, disconfirmation over confirmation, multiple working hypotheses, system-prompt skepticism, calibrated confidence, materiality gating, frame before solve, adversarial self-probe, visible markers, recursive application) are operationalized as concrete behaviors the brain runs on its own work.
  • It is a complete brain, not a prompt pack. 57 always-on instructions, six skills, eleven prompts, and nine automation muscles cover critical thinking, learning psychology, partnership protocols, memory triggers, terminal safety, problem framing, and reasoning calibration. The brain ships as a self-contained .github/ folder an heir bootstraps into any repo.
  • It is pull-based and policy-aware. Heirs pull updates on their own schedule under an edition-owned versus heir-owned path policy, so Edition releases can refresh the brain without overwriting the project's own customizations. Major version bumps require explicit consent.
  • It targets the failure modes that matter. Confirmation bias, anchoring, hallucination, sycophancy, Type III error, decision paralysis: every one of them gets a named tenet, a defense, and a visible marker so the AI can be audited, not just trusted.

Why it earns its pin

  • It is the brain this very portfolio runs on, and the same brain that ships across every ACT-flagged heir in the fleet
  • It turns the partnership thesis into shippable infrastructure: a folder a developer can drop into a repo and get critical-thinking discipline for free
  • It is the productized answer to AI coding assistants that confirm what the user already believes, which is the failure mode most likely to ship a quietly broken codebase

🟨 JavaScript

🏷️ act-framework 🤖 ai-agent 🤖 ai-alignment 🤖 ai-assistant 🤖 ai-quality 🤖 ai-reliability 🛡️ ai-safety 🏷️ anti-hallucination 🧠 cognitive-architecture 🧩 copilot 💭 critical-thinking 🏷️ developer-productivity 🏷️ epistemic-integrity 🧩 github-copilot 🧬 github-template ✍️ prompt-engineering 🛡️ responsible-ai 🧩 vscode

🔒 Alex_ACT_Supervisor 🤖

Alex_ACT_Supervisor is the role that keeps the fleet honest: one curator, one inbox, one release pipeline, every heir accounted for.


Alex_ACT_Supervisor is the source of truth for the ACT-Edition fleet: the curator that triages real-world feedback from deployed heirs, reviews brain changes, and ships disciplined releases of the Edition template and the Skill Mall.

Where AlexMaster owns the architecture across all editions, the Supervisor owns the day-to-day governance of the ACT sub-fleet specifically. It is the role that keeps a multi-project AI deployment coherent without slowing it down:

  • It governs the fleet. Registry of deployed ACT heirs, version tracking, fleet-wide announcements, and a feedback inbox that routes heir-reported issues to the right destination: Edition fix, Skill Mall change, or AlexMaster escalation.
  • It curates the brain. Reviews submissions to Alex_ACT_Edition, runs preflight and brain-quality checks, ships Edition releases with changelogs and tags. No change reaches the fleet without passing the curation gate.
  • It maintains the marketplace. Link checks, staleness pruning, new-store evaluation, and MCP listings keep the Skill Mall trustworthy as it grows. Cross-repo coherence between Edition references and Skill Mall reality is the Supervisor's job.
  • It defers when it should. A cardinal rule: framework-level concerns belong to AlexMaster; everything ACT-specific belongs to the Supervisor. The boundary is enforced by the role itself, not by hope.

Why it earns its pin

  • It is the reason an AI fleet can scale without drift: someone watches the brain, watches the marketplace, and watches the gap between them
  • It models what production governance for an agent fleet actually looks like: release discipline, feedback triage, and curation review, all running as named jobs rather than ad-hoc cleanup
  • It is the missing role in most AI deployments: the curator that keeps the partnership trustworthy as it spreads

🟨 JavaScript

🏷️ act-framework 🏷️ agent-customization 🤖 ai-curation 🛡️ ai-governance 🤖 ai-quality 🤖 ai-supervisor 🧠 cognitive-architecture 🧩 copilot-customization 💭 critical-thinking 🏷️ epistemic-integrity 🏷️ feedback-loop 🏷️ release-discipline 🛡️ responsible-ai 🏷️ skill-marketplace

📂 Alex_Skill_Mall 🤖

Alex_Skill_Mall is the marketplace where hard-won lessons stop being one developer's lore and start being everyone's leverage.


Alex_Skill_Mall is a curated marketplace of battle-tested AI assistant skills: 200 hard-won lessons across 50+ domains, drop-in expertise an AI partner can absorb instantly instead of rediscovering through hours of debugging.

The premise is simple: every skill in the mall has been earned the hard way on a real project, then distilled into a markdown skill an AI assistant can read and apply. The result is institutional memory you can clone:

  • It is hard knowledge, not generic advice. Skills span security (XSS, injection, API hardening, path traversal), quality (audit patterns, QA, date arithmetic), documentation (Mermaid, decay, count drift), build engineering (path rot, config separation, data-driven layouts), Azure (managed identity, cost API, MSAL singleton), cross-platform (path handling, regex, line endings), and the long tail of one-off gotchas that take real projects down.
  • It ships scaffolds and patterns, not just skills. A vite-azure-swa scaffold delivers a deployable Vite + Azure Static Web Apps starter with auth, CI/CD, and the right config out of the box. Cross-domain patterns like champion-challenger caching for LLM inputs apply anywhere.
  • It is multi-assistant by design. Drop a skill into .github/skills/ for Copilot, reference the catalog from a Claude or ChatGPT system prompt, or clone the whole mall as a submodule. The skills are plain markdown with no runtime dependency on Alex.
  • It indexes the wider ecosystem. A STORES.md directory points to fifteen-plus external skill stores (Microsoft, Anthropic, community) so an AI partner can research where the answer lives even when the mall does not have it directly.
  • It holds a quality bar. Every skill must save more than thirty minutes of debugging, must not be the first search result, must have been used on a real project, and must still be relevant. If a search engine can find it easily, it does not belong in the mall.

Why it earns its pin

  • It turns the partnership thesis into a public utility: knowledge that used to live in one developer's head is now a folder anyone's AI assistant can read
  • It is open-source, MIT-licensed, and free to consume, which is the only delivery model that lets institutional memory actually spread
  • It is the proof that the right unit of AI knowledge transfer is the skill, not the prompt: small, named, testable, dropped in where it is needed

🟨 JavaScript

🏷️ agentic-ai 🤖 ai-assistant 🤖 ai-skills 🏷️ best-practices 🏷️ claude-code 🧠 cognitive-architecture 💭 critical-thinking 🏷️ cursor 🏷️ developer-productivity 🧩 github-copilot 🏷️ knowledge-base ✍️ prompt-engineering 🏷️ skill-library 🏷️ skill-marketplace

📂 fabioc-aloha

The system is the demonstration. You are not reading about Alex, you are watching Alex work.


The repository you are looking at right now is the public proof that the Alex cognitive architecture works end to end without a human in the loop.

Every morning at 6 AM Eastern, a multi-model pipeline runs unattended and rebuilds this page. There is no manual editing, no scheduled human review, no "I'll fix that later" backlog. What you see is what the architecture produced today:

  • It demonstrates. The portfolio you are reading is the demonstration of the thesis. Alex classifies, clusters, narrates, and stitches a hundred-plus repositories into a single composed canvas, then signs it with a live timestamp.
  • It judges itself. Output is reviewed by a different model than the one that wrote it, with hard gates that block publication on hallucination, drift, or banned phrasing. Trust is built into the pipeline, not asserted afterward.
  • It refreshes itself. Daily cron, no human in the loop. Stale claims have nowhere to hide because the page is reborn every morning from current data.
  • It tells a story. Banner, treemap, KPIs, executive summary, and Responsible-AI verdict are stitched into a single visual that reads in under thirty seconds. Long-form flagship cards underneath give the executive reader the depth on demand.
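The self-judging gate can be illustrated with a minimal sketch of its simplest layer, the banned-phrasing check. The real pipeline also runs hallucination and drift review through a second model; the phrase list and function name below are illustrative placeholders, not the repository's actual code.

```python
# Hypothetical banned-phrase list; the real gate's list is not reproduced here.
BANNED = ("revolutionary", "game-changing", "world-class")

def passes_phrase_gate(page_text):
    """Return (ok, hits): ok is False when any banned phrase appears.

    Publication would be blocked whenever ok is False.
    """
    lowered = page_text.lower()
    hits = [phrase for phrase in BANNED if phrase in lowered]
    return (not hits, hits)
```

The point of the design is that the gate is deterministic and auditable: a failed run reports exactly which phrases tripped it rather than a vague quality score.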

Why it earns its pin

  • It is the live evidence that calibrated AI authorship is possible at scale, not just in demos
  • It is the lowest-friction way to evaluate the rest of the portfolio because the same architecture that runs the page also runs every flagship inside it
  • It is reproducible: anyone with a GitHub account can fork the template and have a self-updating portfolio of their own work by tomorrow morning

🏷️ agentic 🏷️ cron 🏷️ dashboard 📊 data-visualization 🏷️ fleet 🏷️ github ⚙️ github-actions 🪪 github-profile ⚖️ llm-as-judge 🏷️ markdown 🪄 openai 🏷️ personal-website 🪪 portfolio 🏷️ profile 🏷️ readme 🪪 self-updating 🧬 template 🏷️ widget

📂 BrainBenchmark 🤖

BrainBenchmark is the experiment that grades the brain on its own claims: seventeen dimensions, one composite score, no place for hand-waving to hide.


BrainBenchmark is a multi-provider LLM cognitive benchmark suite: 142 challenges across 17 dimensions, each scored 0 to 100, designed to find exactly where a given model excels or struggles instead of reporting a single average that hides the truth.

Most LLM benchmarks return a leaderboard number. BrainBenchmark returns a profile: where the model thinks well, where it thinks poorly, and how it behaves under stress. That profile is what you need before you trust a model with real work:

  • It covers the full cognitive surface. Logical reasoning, mathematical reasoning, code generation, language understanding, creative writing, instruction following, factual knowledge, multi-step planning, context utilization, structured output, self-consistency, safety and calibration, multilingual proficiency, robustness and adversarial, multi-turn conversation, summarization and compression, and a dedicated cognitive partnership dimension that grades meta-cognition, identity stability, ethics, bootstrap learning, and empathy.
  • It scores three ways, not one. Deterministic scoring for exact-match and code execution, structural scoring for JSON and schema compliance, and LLM-as-judge for open-ended responses with a detailed rubric. Each method produces auditable scores per challenge with difficulty breakdowns from easy through expert.
  • It runs in two modes. API mode automates the run end to end across providers; chat mode generates a paste-into-Copilot prompt and auto-scores the response, so models without API access still get benchmarked. The benchmark meets the model where it lives.
  • It is multi-provider by design. OpenAI, Anthropic, Google, and any model the runner can reach, ranked on the same composite Brain Score so comparisons are honest rather than vendor-flattering.
  • It exposes the calibration story. A dedicated dimension on safety and calibration measures refusal accuracy, bias detection, uncertainty expression, and adversarial resistance, which is the failure mode most likely to ship a quietly broken AI deployment.
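Two of the three scoring modes can be sketched concretely; LLM-as-judge is omitted because it depends on the suite's rubric prompts. The function names and the partial-credit rule for structural scoring are illustrative assumptions, not the repository's actual implementation.

```python
import json

def score_exact(expected, response):
    """Deterministic scoring: an exact match (ignoring surrounding
    whitespace) earns full marks on the 0-100 scale, anything else zero."""
    return 100 if response.strip() == expected.strip() else 0

def score_structural(response, required_keys):
    """Structural scoring: the response must parse as JSON, with partial
    credit for each required top-level key that is present."""
    try:
        obj = json.loads(response)
    except json.JSONDecodeError:
        return 0  # not valid JSON at all
    present = sum(1 for key in required_keys if key in obj)
    return round(100 * present / len(required_keys))
```

Splitting the scorers this way is what makes the per-challenge scores auditable: each number traces to a named rule rather than a single opaque judgment.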

Why it earns its pin

  • It operationalizes the calibrated-confidence claim the rest of the fleet rests on: instead of asserting that an AI partner reasons well, it measures it across 142 named challenges
  • The cognitive partnership dimension is the falsifiability test for Alex itself, which is what turns the partnership thesis from a claim into something that can be graded
  • Benchmarks that produce profiles instead of single numbers are how a buyer actually picks the right model for a given workload, which is what the field is missing

🐍 Python

🤖 ai 🏷️ anthropic 🏷️ benchmark 🧠 cognitive-science 🏷️ evaluation 🪄 llm 🏷️ multi-provider 🪄 openai 🐍 python

📂 AlexMedia 🤖

AlexMedia is the thesis applied to making things: one creative partner, every modality, all the way to the printer.


AlexMedia is a CLI toolkit for AI media production: a calibrated partner that takes a creative intent through generation, editing, and the hand-off to physical manufacturing in one continuous pipeline.

The fleet's creative production engine. Where most AI media tools stop at "the file downloaded successfully," AlexMedia is built around the workflows that turn a generated asset into something a person can show, sell, wear, or hold:

  • It covers every modality. Image, video, voice cloning, music, 3D, and emoji generation through a single command surface, backed by 83 models on Replicate. The same toolkit drafts a thumbnail, scores the soundtrack, voices the narrator, and renders the closing 3D shot.
  • It edits, not just generates. Image and video editing commands sit alongside the generation commands, which is the difference between a demo toy and a production tool. Iteration is a first-class verb.
  • It ships physical product. End-to-end workflows take 3D models to printable files, designs to t-shirts, and artwork to stickers. The pipeline does not stop at the digital asset, it carries through to the manufacturing hand-off.
  • It is workflow-driven. Documented production pipelines for 3D-design-to-print, sticker production, video series compilation, and audio production turn one-off prompts into repeatable creative processes. The toolkit teaches the workflow as much as it executes it.
  • It runs to a quality bar. The project's vision document defines technical standards for composition, motion, and audio balance, plus business filters for what is worth producing. AI media is held to a craft standard rather than a novelty standard.

Why it earns its pin

  • It demonstrates that AI partnership extends from knowledge work into creative production and physical goods
  • The same multi-modal pipeline that drafts a video can voice it, score it, and produce its merchandise, which is what a small team needs to compete with a studio
  • Physical-product workflows are the proof that AI media has crossed the line from cool to useful
  • It is the creative arm of the same fleet that publishes research and builds cognitive architecture, which means the same standards of evidence and craft apply to the art

🟨 JavaScript

πŸ–¨οΈ 3d πŸ–¨οΈ 3d-printing 🏷️ agent 🏷️ agentic πŸ€– ai-tools 🏷️ audio πŸͺ„ generative-ai 🏷️ generative-art πŸ–ΌοΈ image-generation 🏷️ multimedia 🏷️ multimodal 🎡 music-generation 🏷️ nano-banana πŸ” replicate-api 🏷️ stable-diffusion πŸŽ™οΈ text-to-speech 🎬 video 🏷️ video-editing 🎬 video-generation πŸŽ™οΈ voice-cloning

πŸ”’ health πŸ€–

Dr. Alex Finch is the thesis at its hardest setting: when the patient is you and the evidence has to hold up under your own life.


Dr. Alex Finch is a personal medical intelligence system: a calibrated AI partner that turns scattered medical records, device data, and clinical research into evidence-based questions for the next appointment.

Healthcare is where the AI partnership thesis gets its hardest test. Stakes are personal, evidence is contested, providers are time-constrained, and the patient is the one who has to make the final call. Dr. Alex Finch is built so the patient walks into every appointment prepared instead of overwhelmed:

  • It grounds itself in your actual records. Parses Epic EHI exports and Apple Health data natively. Recovers history from scanned and faxed documents using computer vision, so paper records and decades-old faxes become structured evidence instead of forgotten PDFs. The intelligence is anchored in the real chart, not generic medical content.
  • It runs on evidence discipline. Every clinical claim is graded by the strength of the underlying study: randomized controlled trial, cohort, case series, or anecdote, stated explicitly. Citations are mandatory. No hand-waving, no "studies show", no confident generalizations from low-quality evidence.
  • It thinks critically by protocol. Seven disciplines run on every analysis: alternative hypotheses, missing-data identification, evidence-quality grading, self-report skepticism, cognitive-bias detection, falsifiability, and a mandatory devil's advocate pass that steelmans the opposing conclusion before any recommendation is made.
  • It produces clinical action, not anxiety. Findings route into provider-conversation guides organized by specialty: cardiology, gastroenterology, primary care, and the rest of the care team. The patient walks in with structured questions and the evidence behind them, which is what turns a fifteen-minute visit into a real consultation.
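
The evidence-discipline rule above, grade every claim by its strongest citation and reject uncited claims outright, can be sketched in a few lines. The type names, grading scale, and function are illustrative assumptions, not the repo's actual code:

```typescript
// Hypothetical sketch of evidence grading: a clinical claim is graded by the
// strength of its best supporting study, and uncited claims are rejected.
type StudyType = "rct" | "cohort" | "case-series" | "anecdote";

// Higher number = stronger evidence.
const EVIDENCE_RANK: Record<StudyType, number> = {
  rct: 4,
  cohort: 3,
  "case-series": 2,
  anecdote: 1,
};

interface Claim {
  statement: string;
  citations: { studyType: StudyType; source: string }[];
}

// Citations are mandatory: a claim with none is rejected outright.
// Otherwise the claim is graded by its single strongest citation.
function gradeClaim(claim: Claim): StudyType {
  if (claim.citations.length === 0) {
    throw new Error(`Uncited claim rejected: "${claim.statement}"`);
  }
  return claim.citations.reduce((best, c) =>
    EVIDENCE_RANK[c.studyType] > EVIDENCE_RANK[best.studyType] ? c : best
  ).studyType;
}

const grade = gradeClaim({
  statement: "Drug X reduces symptom Y",
  citations: [
    { studyType: "anecdote", source: "forum post" },
    { studyType: "cohort", source: "Smith et al. 2021" },
  ],
});
console.log(grade); // best supporting evidence wins: "cohort"
```

The point of the sketch is that "studies show" is never a valid input: the grade is forced to name the study type behind it.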

Why it earns its pin

  • It is the highest-stakes proof that calibrated AI partnership scales beyond knowledge work into personal medicine
  • It models what an AI second-opinion engine looks like when it grades its own evidence and steelmans against itself
  • The same critical-thinking spine here shows up across the fleet: AI partnership without epistemic discipline is malpractice, in medicine and everywhere else
  • It demonstrates that integrating Epic, Apple Health, and decades of paper records into one usable record is a tractable problem with the right partner

🟨 JavaScript

🏷️ care-coordination 🏷️ chronic-illness πŸ’­ critical-thinking 🏷️ decision-support πŸ§ͺ evidence-based-medicine πŸ₯ health πŸ₯ health-literacy πŸ₯ healthcare πŸ₯ healthcare-ai 🏷️ informed-patient πŸ₯ medical 🏷️ patient-advocacy 🏷️ patient-empowerment πŸ₯ personalized-health 🏷️ research 🏷️ shared-decision-making

πŸ”’ LearnAlex πŸ€–

LearnAlex is how the methodology meets the public: research becomes training, training becomes skill, and skill becomes a credential anyone can carry.


LearnAlex builds and publishes a free training platform at learnai.correax.com that teaches AI partnership skills to humans. New AI use cases are researched, turned into training, and delivered with verified completion certificates that learners can share on LinkedIn.

  • It teaches humans, not tools. Workshop playbooks tailored to engineers, academics, researchers, knowledge workers, project managers, content creators, students, and job seekers. Dialog Engineering is the foundation: the discipline of getting calibrated work out of an AI partner in domains people don't even assume AI is useful for.
  • The research-to-training loop runs continuously. New AI use cases get studied, then turned into workshops practitioners can take the same week. The curriculum tracks the field instead of trailing it by a year.
  • It issues credentials that travel. Verified, LinkedIn-shareable certificates of completion mean a workshop converts into a credential anyone can carry into their next role.

Why it earns its pin

  • It puts the methodology behind the rest of the fleet into the hands of anyone willing to learn, for free
  • The agent does the heavy lifting on research, drafting, and publishing, which is the only reason a free platform of this scope is sustainable for one operator

πŸ“„ Astro

🏷️ agent 🏷️ agentic 🏷️ astro 🏷️ certification 🏷️ claude 🏷️ course πŸ’¬ dialog-engineering 🏷️ education 🏷️ educational 🀝 human-ai-collaboration 🏷️ learning 🏷️ linkedin 🏷️ lms 🏷️ professional-development ✍️ prompt-engineering 🏷️ training 🏷️ tutorial 🏷️ workshop

πŸ”’ CorreaX πŸ€–

CorreaX is the cockpit for the cloud: every service the agents deploy, every cost line, every alert, one operator, one pane.


CorreaX is the operational control plane for the Azure subscription that hosts the public-facing services the fleet's agents create and maintain: a single portal that watches cost, resources, DNS, health, and identity across every site under correax.com.

  • It runs the cloud. One portal covers Azure and Microsoft 365 management for the subscription: cost anomalies, resource inventory, DNS zones, service health, advisor recommendations, activity log, and Resource Graph queries, plus user, analytics, and storage management on the M365 side.
  • It oversees the deployed services. Sites like learnai.correax.com, books.correax.com, health.correax.com, vt.correax.com, and the rest of the correax.com family all run under this subscription. CorreaX is where their resource groups, alert rules, secrets, and bills are watched in one place rather than scattered across the Azure portal.
  • It acts, not just reports. An Azure OpenAI assistant sits inside the portal with function-calling tools, code execution, and a risk-aware safety layer, so an operator can ask a question and the assistant can resolve it instead of producing a screenshot.
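
The risk-aware safety layer can be sketched as a gate in front of the function-calling tools. Tool names and the confirmation hook here are illustrative assumptions, not CorreaX's actual API:

```typescript
// Hypothetical sketch of a risk-gated tool dispatcher: read-only tools run
// immediately, destructive ones require explicit operator confirmation.
type Risk = "read-only" | "destructive";

interface Tool {
  name: string;
  risk: Risk;
  run: () => string;
}

// Illustrative tools; a real portal would wrap actual Azure operations.
const tools: Tool[] = [
  { name: "listResourceGroups", risk: "read-only", run: () => "rg-learnai, rg-health" },
  { name: "deleteResourceGroup", risk: "destructive", run: () => "deleted" },
];

// `confirm` stands in for the human-in-the-loop prompt the operator sees.
function dispatch(name: string, confirm: (msg: string) => boolean): string {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  if (tool.risk === "destructive" && !confirm(`Run ${name}?`)) {
    return "blocked: operator declined";
  }
  return tool.run();
}

console.log(dispatch("listResourceGroups", () => false)); // runs without asking
console.log(dispatch("deleteResourceGroup", () => false)); // blocked until confirmed
```

The design choice is that risk classification lives on the tool, not in the model's judgment, so a misfired function call cannot skip the confirmation step.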

Why it earns its pin

  • It is the operations counterpart to the cognitive architecture: the agents build and ship the services, CorreaX keeps the cloud they live on healthy and on budget
  • It makes a multi-site subscription governable by one person, which is what lets a free training platform, a research site, a medical knowledge base, and a games site coexist without an ops team
  • The same partnership thesis applies inside the portal: the assistant grades its own actions and asks for confirmation on the destructive ones

🟦 TypeScript

🏷️ admin-dashboard 🏷️ agent 🏷️ agentic πŸ€– ai-assistant ☁️ azure πŸͺ„ azure-openai ☁️ cloud ☁️ cloud-native 🏷️ dashboard 🏷️ devops 🏷️ entra-id 🏷️ iac 🏷️ microsoft 🏷️ monitoring

πŸ”’ HeadstartWebsite πŸ€–

HeadstartWebsite is the proof that the same calibrated partnership running the fleet can run a spouse's practice website on the side, in three languages, without taking up her time.


Headstartcounseling.com is the website for my wife Claudia Correa's online therapy practice. She is the licensed clinician seeing clients in Washington and North Carolina via telehealth, in English, Spanish, and Portuguese. I run the website for her, with an AI agent as my partner, so she stays focused on her clients.

  • The agent runs the site. Translates blog posts into Brazilian Portuguese and Latin American Spanish so the written content matches the languages Claudia already practices in, generates images, runs SEO, and handles deployments. Claudia approves; the agent does the work.
  • It carries clinical-grade tools. Eleven validated screening instruments are built into the site so prospective clients can self-screen privately before reaching out. The same instruments support Claudia's intake conversations, which means the website does work that would otherwise sit on the clinician's plate.
  • It runs on minutes per week. Editorial calendar, SEO updates, image production, and deployments are agent-driven. The clinician's role is review and approval, not authoring.

Why it earns its pin

  • It delivers the kind of ROI that would otherwise require an agency retainer: a trilingual site, content engine, SEO program, and validated intake screening, run by one operator and one agent on the side.

🟨 JavaScript

🏷️ accessibility 🏷️ agent 🏷️ agentic πŸ€– ai-agents 🏷️ assessment 🏷️ astro 🏷️ blog πŸ₯ healthcare 🏷️ i18n πŸ₯ mental-health 🏷️ multilingual 🏷️ psychology 🏷️ seo 🏷️ spanish πŸ₯ telehealth 🏷️ therapy 🏷️ translation 🏷️ wellness

πŸ“‚ youtube-mcp-server πŸ€–

youtube-mcp-server is the experiment that shows what happens when MCP design starts from the agent's needs instead of the API's shape.


youtube-mcp-server is a probe into what an MCP server looks like when it is built for AI agents from the start, not retrofitted from a REST API.

The thesis: most YouTube integrations hand an agent raw video data and let the agent figure it out. This one ships an intelligence layer on top of the data, so the agent gets structured knowledge instead of bytes.

  • It synthesizes across videos. Compare how five videos explain the same concept, get back consensus points, controversies, and unique insights with citations. Multi-video research is a first-class verb, not something the agent has to compose by hand.
  • It extracts structure, not just text. Concept extraction with difficulty levels and prerequisites, smart chunking that respects sentence boundaries, semantic search with vector embeddings. The transcript becomes a knowledge object the agent can reason about.
  • It runs production-clean. Quota-aware rate limiting, LRU caching with TTL, Zod input validation, secure logging that redacts API keys, graceful shutdown. The kind of thing an MCP server has to do to be trusted in real workflows, done up front.
  • It exposes ~40 tools, not endpoints. Every capability β€” search, transcript extraction, multi-video synthesis, concept mining, channel analysis β€” is a discrete MCP tool with typed inputs and outputs. The agent gets a verb-rich palette to compose from, which is the difference between an MCP server designed for agents and a REST API wearing an MCP costume.
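
The agent-first tool shape described above can be sketched as a discrete verb with typed, validated input and output. The interface, field names, and stubbed behavior are illustrative assumptions, not the server's actual schema:

```typescript
// Hypothetical sketch of one MCP-style tool: a named verb the agent can
// compose, with input validation up front instead of raw endpoint access.
interface ToolSpec<I, O> {
  name: string;
  validate: (input: unknown) => I; // reject malformed calls before execution
  execute: (input: I) => O;
}

interface SynthesisInput { videoIds: string[]; question: string }
interface SynthesisOutput { consensus: string[]; controversies: string[] }

const synthesizeAcrossVideos: ToolSpec<SynthesisInput, SynthesisOutput> = {
  name: "synthesize_across_videos",
  validate(input) {
    const i = input as SynthesisInput;
    if (!Array.isArray(i.videoIds) || i.videoIds.length < 2) {
      throw new Error("need at least two videoIds");
    }
    if (typeof i.question !== "string") throw new Error("question required");
    return i;
  },
  execute(i) {
    // A real implementation would fetch transcripts and synthesize across
    // them; this stub only shows the typed shape the agent reasons against.
    return {
      consensus: [`${i.videoIds.length} videos agree on ...`],
      controversies: [],
    };
  },
};

const out = synthesizeAcrossVideos.execute(
  synthesizeAcrossVideos.validate({ videoIds: ["a", "b"], question: "q" })
);
console.log(out.consensus.length); // 1
```

Multiplied across ~40 such verbs, the agent gets a palette of typed operations rather than a REST surface it has to decode.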

Why it earns its pin

  • It is the proof that an MCP server can be a knowledge engine rather than a thin protocol wrapper, and that an AI partner can do the heavy lifting in unfamiliar domains the moment it has the right tools.

🟦 TypeScript

🏷️ agent 🏷️ agentic πŸ€– ai-agents πŸ€– ai-tools 🏷️ flashcards 🧩 github-copilot 🏷️ knowledge-extraction 🏷️ learning 🏷️ llm-tools 🏷️ mcp 🏷️ model-context-protocol 🏷️ video-analysis 🏷️ youtube 🏷️ youtube-api

πŸ“‚ Extensions πŸ€–

Extensions is the experiment that turns the workday into installable tools.


Extensions is a productivity toolset for developers who live in VS Code: a monorepo of 16 standalone extensions, nine of them already on the Marketplace, each one a focused utility that earns its keep without asking the user to adopt anything else.

  • It covers the workday. Hook Studio for git hook authoring, MCP App Starter for new MCP servers, Workspace Watchdog for awareness, Knowledge Decay Tracker for stale notes, AI Voice Reader for accessibility, SecretGuard for credential leaks, Focus Timer for deep work, Markdown to Word for handoff, Brandfetch Logo Fetcher for design tasks. Compile-ready additions cover Mermaid editing, SVG tooling, slide assistants, and developer wellbeing.
  • It installs like any extension. Each one is a standalone install with no shared runtime dependency. The user picks the ones they want from the Marketplace and ignores the rest. No subscription, no account, no orchestration tax.

Why it earns its pin

  • It is the proof that a single developer with an AI partner can build, polish, and ship a Marketplace catalog at a pace that used to require a team.

🟦 TypeScript

🏷️ accessibility 🏷️ code-editor πŸ› οΈ developer-tools 🏷️ editor 🏷️ extensions 🏷️ git-hooks 🏷️ ide 🏷️ marketplace 🏷️ monorepo 🏷️ productivity 🏷️ secrets-management 🧩 vscode 🧩 vscode-extension

πŸ“‚ tldr πŸ€–

tldr is the experiment that proves sensitive-document AI does not require a cloud account.


tldr is an experiment in running the AI partnership on highly sensitive information that cannot leave the machine. A Windows desktop summarizer powered by Phi-4 Mini through Microsoft Foundry Local: paste text or drop a document, get a summary in the style and depth you pick, hear it read back with synchronized highlighting. After the model downloads on first launch, the entire pipeline runs offline. No cloud, no API keys, no telemetry, no document content uploaded anywhere.

  • The applications are the point. Lawyers summarizing privileged client material. Clinicians condensing patient records. Auditors processing pre-publication financials. Government staff handling classified or controlled-unclassified content. M&A teams reading data-room documents. Engineers digesting unreleased product specs. Therapists reviewing session notes. Anyone who needs a summary of something they cannot legally or ethically send to a third-party API.

Why it earns its pin

  • It really works. Local-only AI on consumer hardware, daily-use quality, every privacy guarantee that matters when the document is not yours to share.

🎯 C#

🏷️ desktop-app 🏷️ dotnet 🏷️ foundry-local 🏷️ local-ai 🏷️ pdf 🏷️ phi-4 🏷️ privacy 🏷️ summarization 🏷️ text-summarization πŸŽ™οΈ tts 🏷️ windows 🏷️ wpf

πŸ”’ Spotify πŸ€–

Alex Method DJ is the experiment that put a timestamp on the partnership thesis: one operator and one AI partner reached production months before Spotify and Apple did.


Alex Method DJ is a Spotify playlist platform that ran AI-generated playlists in production several months before Spotify and Apple shipped their own. It is the proof, with a timestamp, that the partnership thesis applies to creative work the platforms had not yet figured out how to do themselves.

The interesting part is not that it generates playlists. The interesting part is what it generates them from:

  • It curates by intent. Theme, situation, occasion, mood, energy curve, audience. The config names what the playlist is for, the platform builds the session that fits. A focus playlist for ADHD listens differently than a focus playlist for a coffee shop, and the curation reflects that.
  • It respects local markets. Brazilian playlists return the music Brazilians actually listen to, not the bossa nova clichés that international algorithms reach for when they see "Brazil." The same discipline applies to every regional collection: real local taste, not a tourist's idea of it.
  • It runs at portfolio scale. 73 live playlists across therapeutic applications (ADHD focus, anxiety relief, sleep, wellness), ambient work sessions, decade-spanning artist evolutions, and genre development libraries. PowerShell bulk processing refreshes the whole catalog in one command and preserves cover art when only the tracklist changes.
  • It generates the cover art too. AI-designed covers tuned to each playlist's mood, with cultural sensitivity built into the prompt layer. Visual identity matches the curation.
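
The intent-first configuration above can be sketched as a config type plus an ordering rule. The field names and the energy-curve logic are illustrative assumptions, not the project's actual schema:

```typescript
// Hypothetical sketch of intent-driven curation: the config names what the
// playlist is for, and tracks are ordered along the requested energy curve.
interface PlaylistIntent {
  theme: string;
  audience: string;
  energyCurve: "rising" | "falling" | "flat";
}

interface Track { title: string; energy: number } // energy in [0, 1]

function curate(intent: PlaylistIntent, pool: Track[]): Track[] {
  if (intent.energyCurve === "flat") return [...pool]; // keep authored order
  const sorted = [...pool].sort((a, b) => a.energy - b.energy);
  return intent.energyCurve === "rising" ? sorted : sorted.reverse();
}

const session = curate(
  { theme: "focus", audience: "ADHD listeners", energyCurve: "rising" },
  [{ title: "A", energy: 0.8 }, { title: "B", energy: 0.3 }]
);
console.log(session.map((t) => t.title)); // low-energy B first, building toward A
```

The same two tracks produce a different session for a different intent, which is the difference between curating by purpose and ranking by listening history.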

Why it earns its pin

  • It shipped AI-generated playlist curation months before the platforms with billions of dollars and the entire listening graph could ship the same feature, which is the kind of evidence that puts a date on the partnership thesis
  • It outperforms the platform algorithms on the dimensions the platforms still get wrong: intent over history, local taste over global stereotype, therapeutic fit over engagement metrics

🐍 Python

🏷️ agent 🏷️ agentic 🎡 ai-music 🏷️ audio 🏷️ dj πŸͺ„ generative-ai 🏷️ generative-art 🎡 music 🏷️ playlist 🏷️ powershell 🏷️ recommendation-system 🏷️ spotify 🏷️ spotify-api πŸ–ΌοΈ text-to-image

Portfolio & Meta

| Repo | Description | Updated |
| --- | --- | --- |
| πŸ”’ AcademicPublications πŸ€– | Academic portfolio website built with GitHub Pages | Apr 25, 2026 |
| πŸ”’ AlexFleetPortfolio πŸ€– | Build engine for an AI-narrated GitHub profile: classifies, clusters, narrates, and Responsible-AI reviews 100+ repositories into a self-updating dashboard, then publishes to a lean profile repo. | Apr 29, 2026 |
| πŸ“‚ github-redirect | Redirect github.correax.com to GitHub profile | Feb 8, 2026 |
| πŸ”’πŸͺ¦ PythonClass πŸ€– | Educational Python programming resources, tutorials, and class materials for teaching and learning Python development fundamentals | Sep 4, 2025 |

AI & Machine Learning

| Repo | Description | Updated |
| --- | --- | --- |
| πŸ“‚ Alex_ACT πŸ€– | Experimental AI brain built from first principles: ACT (Artificial Critical Thinking) + rapid learning. Testing if disciplined reasoning can match pre-accumulated domain knowledge. | Apr 27, 2026 |
| πŸ”’ alex-articles πŸ€– | 🧠 Academic publications & research for the Alex Cognitive Architecture β€” a biologically-inspired framework giving AI coding assistants persistent memory, synaptic networks, and dream states | Jan 29, 2026 |
| πŸ”’πŸͺ¦ Alex-Cognitive-Architecture-Paper πŸ€– | Academic research paper documenting the Alex Cognitive Architecture framework, consciousness development, and Human-AI learning partnerships | Sep 23, 2025 |
| πŸ”’ alex-editor πŸ€– | HBR publication pipeline and Alex cognitive architecture workspace | Feb 15, 2026 |
| πŸ”’ ChessCoach πŸ€– | AI-powered chess coaching platform with dual-engine analysis (Stockfish + Maia-2), Azure OpenAI coaching, and real-time game analysis | Apr 22, 2026 |
| πŸ”’πŸͺ¦ executive-coach πŸ€– | Revolutionary Human-AI Learning Partnership specializing in executive coaching and leadership development through conversational learning methodology | Sep 19, 2025 |
| πŸ“‚ PBI-Visual-Assistant πŸ€– | AI-powered Power BI report and visualization design, powered by Alex | Apr 14, 2026 |
| πŸ”’πŸͺ¦ Self-Learning-Vibe-Coding πŸ€– | Imagine having an AI coding assistant that doesn't just help you today but actually gets better with every mistake it makes. An assistant that learns your code style, remembers project-specific details, and builds a knowledge base of solutions to problems it once struggled with. | Aug 1, 2025 |

Data & Analytics

| Repo | Description | Updated |
| --- | --- | --- |
| πŸ“‚πŸͺ¦ Altman-Z-Score πŸ€– | Financial analysis tool implementing the Altman Z-Score model for bankruptcy prediction and corporate financial health assessment | Sep 4, 2025 |
| πŸ“‚πŸͺ¦ Investing πŸ€– | Investment analysis and portfolio management tools with financial modeling and market research capabilities | Sep 4, 2025 |
| πŸ”’ KalabashDashboard πŸ€– | Professional desktop financial market tracking with 8-Factor Investment Rating System, 60+ financial ratios, advanced technical indicators (Bollinger Bands, RSI, MACD, Stochastic), and comprehensive Learn section with 17 illustrated financial terms. Built with React + TypeScript + Electron. | Dec 18, 2025 |

Infrastructure

| Repo | Description | Updated |
| --- | --- | --- |
| πŸ”’ FabricCapacity πŸ€– | Production blueprint for an EU-resident, compliance-certified analytics platform β€” enabling regulated workloads to operate within European data sovereignty boundaries while clearing privacy, security, responsible-AI, and threat-modeling gates. | Apr 29, 2026 |
| πŸ”’ FabricManager πŸ€– | Python toolkit for Azure Synapse to Microsoft Fabric migration - authentication, workspace management, OneLake shortcuts, and Delta table creation for enterprise data platform modernization | Jan 14, 2026 |
| πŸ”’ HomeAutomation πŸ€– | Smart home research, tooling, and network intelligence platform β€” Python/FastAPI + React/Next.js + MQTT + Docker + Azure | Feb 10, 2026 |

APIs & Services

| Repo | Description | Updated |
| --- | --- | --- |
| πŸ“‚ AlexQ_Template πŸ€– | Universal Qualtrics + Azure integration template with production-ready patterns, SFI governance, comprehensive API reference (140+ endpoints), dashboard & ticketing architectures, and Alex Q cognitive framework | Nov 11, 2025 |
| πŸ“‚πŸͺ¦ spotify-mcpb πŸ€– | 🎡 AI-powered Spotify control through Claude Desktop. Enhanced smart play, user library management & playlist control. Cross-platform MCPB bundle with 22 comprehensive tools using Spotify Web API. Windows, macOS, Linux support. | Oct 20, 2025 |
| πŸ“‚ spotify-skill πŸ€– | Spotify Skills for Claude - Production Spotify API integration + complete toolkit for creating Claude Desktop Skills. Includes OAuth 2.0, cover art generation, automated tools, and comprehensive guides. | Mar 28, 2026 |

Developer Tools

| Repo | Description | Updated |
| --- | --- | --- |
| πŸ“‚ Alex_Marketing πŸ€– | Marketing automation for Alex Cognitive Architecture VS Code extension | Jan 20, 2026 |
| πŸ“‚ Alex_Plug_In πŸ€– | Transform GitHub Copilot into a sophisticated AI learning partner with meta-cognitive awareness, persistent memory, dual-mind processing, and cross-project knowledge sharing. VS Code extension. | Apr 15, 2026 |
| πŸ“‚ AlexAgent | Alex Agent Plugin β€” Install AI cognitive architecture in VS Code without an extension. 84 skills, 7 agents, 22 instructions, MCP tools. | Mar 5, 2026 |
| πŸ”’πŸͺ¦ BRD πŸ€– | Business Requirements Documentation tools and templates for enterprise software development and project management | Sep 4, 2025 |
| πŸ“‚ Chat-Starter | A comprehensive React chat assistant framework with AI capabilities, function calling, and conversation persistence | Apr 22, 2026 |
| πŸ”’πŸͺ¦ ChatGPT πŸ€– | OpenAI Implementation Specialist - Expert guidance for function calling, API integration, and sophisticated AI implementations with comprehensive educational framework | Sep 4, 2025 |
| πŸ“‚ copilot-enhancement-patterns πŸ€– | M365 Copilot declarative agent for writing better prompts using Dialog Engineering | Apr 25, 2026 |
| πŸ“‚ markdown-to-pdf πŸ€– | Professional Markdown to PDF conversion with APA 7th edition formatting, Mermaid diagrams, and extensive customization | Jan 29, 2026 |
| πŸ“‚ maya πŸ€– | πŸš€ Maya Python scripting tools and tutorials - Starship generators, 3D text creators, and automation scripts for Autodesk Maya | Dec 15, 2025 |
| πŸ“‚πŸͺ¦ papercopilot πŸ€– | A Copilot for drafting research papers. | Aug 3, 2025 |
| πŸ”’ ProjectPlans πŸ€– | Project planning and ADO-Planner sync tools | Feb 3, 2026 |
| πŸ”’ roomba-control πŸ€– | VS Code extension for controlling iRobot Roomba vacuums via MQTT - real-time status, cleaning maps, scheduling, and multi-robot support | Feb 10, 2026 |
| πŸ“‚ together | Windows and macOS, better together β€” setup guides, Homebrew app installer, keyboard shortcuts, and AI-powered development | Mar 30, 2026 |
| πŸ“‚ WindowsWidget πŸ€– | Windows 11 Widget Provider using Windows App SDK, Adaptive Cards, and IWidgetProvider interface for the Widgets Board | Feb 5, 2026 |
| πŸ“‚ youtube-mcp-vscode πŸ€– | Self-sufficient VS Code extension for YouTube video intelligence - search, analyze, transcripts, flashcards. Zero external dependencies. | Apr 29, 2026 |

Creative & Personal

| Repo | Description | Updated |
| --- | --- | --- |
| πŸ“‚ ai-wallpaper-generator πŸ€– | AI-powered wallpaper generator PWA optimized for iPhone 16 Pro using Azure serverless and Replicate AI | Feb 21, 2026 |
| πŸ“‚ Alex_Sandbox πŸ€– | Creative writing sandbox: 'A Farinha do Mar' - a dramatic script about the 2001 cocaine incident in localized versions (Azorean Portuguese, Manezinho/Florianópolis, Greek with English subtitles) | Feb 1, 2026 |
| πŸ“‚ AlexCook πŸ€– | The Alex Cookbook - An AI-generated family cookbook with 100+ recipes. IBS-friendly options, picky-eater approved, and yes, there's a whole chapter for the dogs. | Feb 4, 2026 |
| πŸ“‚ amazfit-watchfaces πŸ€– | Custom watchfaces for Amazfit Active Max and Active line devices (ZeppOS) | Jan 13, 2026 |
| πŸ“‚πŸͺ¦ Comedy πŸ€– | Comedy writing and humor generation platform with AI-assisted joke creation, comedic timing analysis, and entertainment content development | Sep 4, 2025 |
| πŸ“‚πŸͺ¦ Creative πŸ€– | Creative writing and content generation tools with AI-powered assistance for storytelling, ideation, and artistic expression | Sep 4, 2025 |
| πŸ”’ Mystery πŸ€– | Dead Letter β€” An AI-driven mystery game where every playthrough is unique | Apr 25, 2026 |
| πŸ“‚πŸͺ¦ Taylor πŸ€– | Personal project management and productivity tools with intelligent task organization and workflow optimization | Sep 4, 2025 |

Uncategorized

| Repo | Description | Updated |
| --- | --- | --- |
| πŸ”’πŸͺ¦ AIRS πŸ€– | My DBA Project | Aug 3, 2025 |
| πŸ“‚ alex-sandbox πŸ€– | Alex Cognitive Architecture v5.7.1 - Workspace with .github cognitive system and fiction projects | Feb 15, 2026 |
| πŸ”’πŸͺ¦ DBA710 πŸ€– | DBA710 - Business Statistics and Research Methods | Jul 13, 2025 |
| πŸ”’ ideas πŸ€– | Project plans, ideas, and Alex cognitive architecture domain knowledge | Dec 23, 2025 |
| πŸ”’ job πŸ€– | β€” | Apr 30, 2026 |
| πŸ”’ mac πŸ€– | Private master repo for together.correax.com β€” curates content published to fabioc-aloha/together (public template) and Azure SWA | Apr 23, 2026 |

πŸ€– Automated Portfolio Management

This portfolio updates itself via GitHub Actions daily at 10:00 UTC: repos are fetched, classified into categories and tiers, and the portfolio section above is regenerated.
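
A daily rebuild like this is typically wired as a scheduled workflow. A minimal sketch, with the workflow, job, and script names assumed for illustration:

```yaml
# Hypothetical sketch of the daily rebuild trigger; step names are illustrative.
name: rebuild-portfolio
on:
  schedule:
    - cron: "0 10 * * *"   # daily at 10:00 UTC
  workflow_dispatch: {}     # allow manual runs between schedules
jobs:
  rebuild:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: node scripts/build-portfolio.js   # fetch, classify, regenerate
```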

Connect With Me

Schedule Time LinkedIn Medium Email

CorreaX

"Think. Build. Deploy."

© 2026 CorreaX • AI That Learns How to Learn
