Jaren Cudilla (jarencudilla)

jarencudilla / ai-prompts-qa-testing-real-workflows.md
Created December 10, 2025 12:43
How to make AI useful for testing by treating it like a supervised junior QA.

AI for QA: Prompts That Generate Real Test Value

Published on QAJourney.net

AI isn't a magic test generator—it mirrors whatever context it's given. With requirements, acceptance criteria, and supervision, AI expands coverage fast. Without them, you get fluff (a minimal prompt sketch follows the list below).

The Real Problem It Solves

  • AI generates shallow/happy-path cases by default
  • Lacks UX judgment + business logic awareness
  • Blind automation amplifies mistakes
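
A minimal sketch of that context-first approach, assuming you assemble the prompt yourself before handing it to whatever model you use; the helper and sample data below are illustrative, not code from the article:

```python
# Sketch: context-rich prompting for test generation.
# Function name, feature, and sample requirements are illustrative.

def build_test_prompt(feature, requirements, acceptance_criteria):
    """Assemble a prompt that pushes the model past happy-path cases."""
    reqs = "\n".join(f"- {r}" for r in requirements)
    acs = "\n".join(f"- {a}" for a in acceptance_criteria)
    return (
        "You are a junior QA tester working under review.\n"
        f"Feature: {feature}\n"
        f"Requirements:\n{reqs}\n"
        f"Acceptance criteria:\n{acs}\n"
        "Write test cases covering negative paths, boundary values, and invalid "
        "input, not just the happy path. Flag any requirement you cannot test "
        "from the information above instead of guessing."
    )

prompt = build_test_prompt(
    feature="Password reset",
    requirements=["Reset link expires after 30 minutes", "One active link per user"],
    acceptance_criteria=["Expired link shows a clear error", "New request invalidates the old link"],
)
print(prompt)
```

Review the output the way you would a junior tester's work: keep what maps to real risk, cut the filler.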

jarencudilla / guerrilla-qa-shift-left-real-world.md
Created December 10, 2025 12:42
How low-budget QA outperforms enterprise setups through constraint-driven prioritization.

Guerrilla QA: Shift-Left Testing for the Real World

Published on QAJourney.net

Not every team has devices, tools, or time. Guerrilla QA succeeds by prioritizing what breaks first, not what looks thorough on paper. Fewer documents, faster feedback, more real coverage.

The Real Problem It Solves

  • Testers wait for resources instead of executing
  • Enterprise-style process slows small teams
  • Feature velocity outpaces documentation

jarencudilla / qa-methodology-vs-test-cases.md
Created December 10, 2025 12:39
Why QA fails when test cases lack depth. Coverage, not methodology, determines quality.

QA Methodology vs Test Cases: Coverage Is the Real Failure Point

Published on QAJourney.net

Teams often blame Agile/Waterfall/Hybrid models when bugs still leak into production—but the real issue is shallow test case design. Weak suites create false confidence, duplicated effort, and regression cycles that don't actually protect releases.

The Real Problem It Solves

  • Methodology changes don’t fix missing test depth
  • Suites pass while production still breaks
  • Duplicate or shallow cases inflate coverage falsely
  • Regression becomes ritual instead of protection

jarencudilla / website-annotation-tools-qa-uat.md
Created November 15, 2025 06:25
QAJourney.net — Why annotation tools cut QA and UAT waste by removing ambiguity from bug reporting. Faster handoffs, fewer clarification loops, clearer defect context.

Annotation Tools: The Fix for Sloppy Bug Reporting

Published on QAJourney.net

Most teams lose hours explaining screenshots instead of fixing bugs. Annotation tools erase that drag by giving developers exact visual context, browser details, and pinpointed failure zones. Less guessing, fewer Slack threads, and zero “where is this?” follow-ups. It’s not about pretty screenshots — it’s about eliminating ambiguity from defect communication.

The Real Problem They Solve

  • Screenshots that lack context or environment data
  • Missing reproduction clarity leading to back-and-forth messages
  • Bugs misunderstood because of unclear indicators

jarencudilla / qa-villain-perception-team-dynamics.md
Created November 15, 2025 06:22
QAJourney.net — Why QA gets labeled the villain: structural failures, upstream shortcuts, and broken team workflows that dump the blame downstream. Clear breakdown of perception, ownership, and sprint tension patterns.

QA Villain Perception: How Teams Misread the Work

Published on QAJourney.net

Teams love the idea of “velocity” until QA shows the real cost of rushing work upstream. That’s when the villain mask gets thrown at the testers — not because QA caused delays, but because QA exposes where the delay was already created. The frustration isn’t personal. It’s structural: uneven workload distribution, undefined acceptance criteria, and tasks piling at the tail end of the sprint.

Where Perception Breaks

  • QA is pulled in only after development runs late
  • Requirements shift while testing is already in progress
  • Defects get treated as “QA blocking the sprint”

jarencudilla / teaching-llms-data-evaluation-prompt-engineering.md
Created October 21, 2025 03:41
EngineeredAI.net - Prompt engineering isn't just writing better questions; it's training LLMs to evaluate your data as QA testers do. Here's how.

Teaching LLMs Data Evaluation via Prompt Engineering

Premise:
LLMs don’t “understand” data — they approximate patterns. We taught them how to grade datasets instead of hallucinating conclusions.

Why it matters:
Every AI team hits the “data blindness” wall. Prompt engineering can simulate reasoning — if structured like QA logic.

Ignore:
Over-complicated “autonomous agent” frameworks.
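
Instead of an agent framework, a structured rubric prompt can do the grading. A hedged sketch of that "grade, don't conclude" framing; the rubric and JSON schema below are invented for illustration, not the article's prompts:

```python
# Sketch: frame dataset evaluation as a QA rubric so the model grades
# against criteria instead of free-associating. Rubric and schema are illustrative.
import json

RUBRIC = {
    "completeness": "Are required fields present in every record?",
    "consistency": "Do values agree across related fields?",
    "plausibility": "Are values within realistic ranges?",
}

def grading_prompt(records):
    criteria = "\n".join(f"- {name}: {question}" for name, question in RUBRIC.items())
    return (
        "Act as a QA tester evaluating a dataset. For each criterion return "
        'JSON: {"criterion": ..., "verdict": "pass|fail|unclear", "evidence": [record indices]}. '
        'Never give a verdict without citing evidence; answer "unclear" instead.\n'
        f"Criteria:\n{criteria}\n"
        f"Records:\n{json.dumps(records, indent=2)}"
    )

print(grading_prompt([{"id": 1, "age": 240}, {"id": 2, "age": 35}]))
```

Forcing a pass/fail/unclear verdict with cited evidence is what makes this QA logic rather than open-ended summarization.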


jarencudilla / building-eai-anti-bs-filter-wordpress-spam-plugin.md
Created October 21, 2025 03:33
EngineeredAI.net - A WordPress plugin built to detect AI-generated spam by tone and intent, not just keywords. Because AI spam won't stop, and neither should your filter.

Building the EAI Anti-BS Filter: A WordPress Spam Plugin

Goal:
Catch AI-generated spam and SEO sludge before moderation fatigue sets in.

Build:
A plugin that scores comment intent and entropy — detects hollow AI fluff in seconds. If it sounds “too polite,” it’s filtered.

Why it matters:
AI spam floods WordPress faster than basic filters catch it. This one spots tone patterns, not just keywords.
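
The plugin itself is WordPress/PHP; here is a language-agnostic sketch of the scoring idea, with markers and thresholds invented for illustration rather than taken from the plugin:

```python
# Sketch: tone-plus-entropy scoring for comment spam.
# POLITE_MARKERS and the thresholds are illustrative guesses.
import math
from collections import Counter

POLITE_MARKERS = ("great post", "thanks for sharing", "very informative", "keep up the good work")

def word_entropy(text):
    """Shannon entropy of the word distribution; hollow fluff reuses safe, generic words."""
    words = text.lower().split()
    if not words:
        return 0.0
    total = len(words)
    counts = Counter(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_ai_spam(comment):
    text = comment.lower()
    politeness_hits = sum(marker in text for marker in POLITE_MARKERS)
    # Stacked pleasantries or unusually uniform wording reads as "too polite".
    return politeness_hits >= 2 or word_entropy(comment) < 2.5

print(looks_like_ai_spam("Great post! Thanks for sharing. Very informative, keep up the good work."))  # True
```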


jarencudilla / learning-game-development-ai-custom-404-games.md
Created October 21, 2025 03:24
EngineeredAI.net - Why let a 404 dead-end a user? This case study uses AI to turn error pages into micro-games that teach and engage, not frustrate.

Learning Game Development with AI: Custom 404 Games

Premise:
If a user lands on a 404, teach them something instead of apologizing.

Build:
Used AI to generate playable micro-games on 404 pages — logic-driven, not gimmicks.
Every failure page becomes a lightweight tutorial or sandbox.
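
A hedged sketch of the pattern, not the post's actual build: a Flask 404 handler that serves a tiny playable page instead of a dead end (the puzzle here is hard-coded, where the original generated games with AI):

```python
# Sketch: serve a one-question logic puzzle on 404 instead of an apology page.
from flask import Flask

app = Flask(__name__)

@app.errorhandler(404)
def play_instead_of_apologize(error):
    # The page teaches something about URLs while the user is already here.
    page = """
    <h1>404: here's a puzzle instead</h1>
    <p>Which part of a URL can be case-sensitive?</p>
    <button onclick="alert('Right: paths can be case-sensitive, so /Page and /page may differ.')">The path</button>
    <button onclick="alert('Nope: domain names are case-insensitive.')">The domain</button>
    """
    return page, 404  # Keep the 404 status so crawlers don't index the page as valid.

if __name__ == "__main__":
    app.run()
```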

Why it works:


jarencudilla / claude-45-vs-claude-4-content-gauntlet.md
Last active October 21, 2025 03:23
EngineeredAI.net — Live test: Claude 4.5 vs Claude 4 under a production-content pipeline. Who survives? What breaks? The data speaks.

Claude 4.5 vs Claude 4 — Content Gauntlet

The real test: Can Claude 4.5 survive production-grade content loops without self-contradiction?

Findings:

  • 4.5 handles memory compression better.
  • 4.0 still wins on structured consistency.
  • Both stumble under recursive reasoning loads.

Why it matters:


jarencudilla / ai-prompts-qa-testing-real-workflow-tldr.md
Created October 21, 2025 02:50
QAjourney.net — Real-world AI in QA isn’t about prompts; it’s about workflow. Train AI to extend tester logic, not replace it. Context-first automation that mirrors human judgment.

Training AI to Think Like a QA: A Real-World Testing Approach

Most “AI for QA” guides sell you a magic vending machine: insert prompt, receive bug report. Reality check—AI can’t replace human judgment. It can only extend it.


AI shines when it handles grunt work.
Documenting, generating draft test cases, summarizing defects—that’s where it buys you time. But when it comes to assessing user frustration, UX violations, or business context, it’s still blind without you.
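
For instance, a minimal grunt-work sketch (the sample defects and wording are illustrative), where the model drafts the triage summary and the judgment call stays with you:

```python
# Sketch: hand the model defect-log grunt work; a human still owns triage.
defects = [
    "Login button unresponsive on Safari 17 after session timeout",
    "Login page throws 500 when email contains a '+' character",
    "Password field accepts 1-character passwords despite the 8-character rule",
]

def summarize_defects_prompt(entries):
    listing = "\n".join(f"- {e}" for e in entries)
    return (
        "Group these defects by affected feature and rank them by likely user "
        "impact. Do not invent severity data that is not in the text.\n" + listing
    )

print(summarize_defects_prompt(defects))
```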

Stop feeding it sterile prompts like: