Roman Travnikov TravnikovDev


Autophagy, starving, Ozempic: the internet’s new health triangle

I kept seeing posts worshipping autophagy like it’s a free car wash for your cells, so I asked my ChatGPT researcher to pull the signal from the noise. The takeaway surprised me: the biology is real, but the claims are way ahead of the data.

Quick refresher: autophagy is your cell’s recycling bin. Low nutrients = your body takes out the trash. That part is solid. The leap to “starving prevents aging and cancer” is where it breaks.

Here’s the snag I didn’t expect: in humans we rarely measure autophagy directly - it’s invasive. Most “anti-aging/cancer” excitement comes from yeast, worms, mice. In people, fasting improves weight, insulin, maybe inflammation. That’s good. But “live longer and never get cancer”? Not proven.

Also, cancer is weird with autophagy. Early on, cleanup may reduce damage. Later, tumors can hijack autophagy to survive stress. So “more autophagy” is not a universal good - some trials even try blocking it in cancer therapy. That f

@TravnikovDev
TravnikovDev / pandora-but-make-it-physics.md
Created November 28, 2025 16:40
LinkedIn Post - 2025-11-28 11:40

Pandora, but make it physics.

The Istanbul exhibit photos of James Cameron’s Hallelujah Mountains got stuck in my head. Could those cliffs really float? Curiosity won, so I asked my ChatGPT researcher to pull me back from movie magic to Maxwell.

Here’s the short version that changed my mind. Magnetic fields store energy, and that energy density acts like pressure - think of an invisible air cushion. At 1 tesla you get roughly 0.4 MPa of “push.” Real force, not sci‑fi. Superconductors take it further: they kick fields out and, with flux pinning, lock themselves in place over magnets. That’s the famous hover that doesn’t wobble.
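
That 0.4 MPa figure falls straight out of the magnetic pressure formula P = B²/2μ₀. A quick sanity check in plain Python (standard constants, nothing exotic):

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def magnetic_pressure(b_tesla: float) -> float:
    """Magnetic pressure P = B^2 / (2 * mu_0), in pascals."""
    return b_tesla ** 2 / (2 * MU_0)

print(f"1 T  -> {magnetic_pressure(1.0) / 1e6:.2f} MPa")   # about 0.40 MPa
print(f"10 T -> {magnetic_pressure(10.0) / 1e6:.1f} MPa")  # pressure scales as B^2
```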

Diamagnetism is the quieter cousin. Everything resists magnetic fields a tiny bit. With absurdly strong fields and gradients, you can levitate a frog or a water droplet. It’s jaw‑dropping. It also needs lab magnets around 16 T. That’s MRI-on-steroids territory.
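
The frog trick can be sanity-checked too. Stable diamagnetic levitation needs the field-gradient product B·dB/dz to balance gravity: B·dB/dz = μ₀ρg/|χ|. A minimal sketch, assuming the frog is basically water (χ ≈ −9×10⁻⁶ is the standard SI volume susceptibility of water):

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
CHI_WATER = -9.0e-6         # SI volume susceptibility of water
RHO_WATER = 1000.0          # kg/m^3 (assumption: a frog is mostly water)
G = 9.81                    # m/s^2

# Levitation condition: (|chi| / mu_0) * B * dB/dz = rho * g
required = MU_0 * RHO_WATER * G / abs(CHI_WATER)
print(f"required B*dB/dz ≈ {required:.0f} T²/m")   # about 1.4e3 T²/m
```

Producing ~1400 T²/m inside a magnet bore is what pushes you into the ~16 T research-magnet territory the post mentions.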

Now the trade-off I didn’t expect: scale kills the dream. A 1 km thick rock “raft” presses down with about 26 MPa at its base. To counter that with magnetic pressure, you are in the ballpa
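
The numbers behind that trade-off are easy to sanity-check. A minimal sketch, assuming a typical crustal rock density of ~2650 kg/m³ (my assumption, not the exhibit’s), and remembering the field would have to be sustained across the raft’s entire underside:

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
RHO_ROCK = 2650.0           # kg/m^3 - assumed typical crustal rock density
G = 9.81                    # m/s^2
THICKNESS = 1000.0          # the post's 1 km thick slab, in metres

p_rock = RHO_ROCK * G * THICKNESS        # pressure at the base: rho*g*h
b_needed = math.sqrt(2 * MU_0 * p_rock)  # field whose B^2/(2*mu_0) matches it

print(f"base pressure    ≈ {p_rock / 1e6:.0f} MPa")   # about 26 MPa
print(f"equivalent field ≈ {b_needed:.1f} T")         # about 8 T, everywhere
```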

TravnikovDev / living-on-the-edge-of-smartphone-capabilities.md
Created November 28, 2025 11:27
LinkedIn Post - 2025-11-28 06:27

Living on the edge of smartphone capabilities

I basically live with 2 GB free on my phone. That tiny buffer is my emotional support bar - the line between smooth and chaos.

I got curious why life on the last gigabyte feels so laggy, so I asked my ChatGPT researcher to dig in. The results made that storage bar look very different.

Modern phones write a lot of temporary stuff - caches, logs, thumbnails, updates. Flash storage can’t overwrite in place: it writes in pages but erases in whole blocks, so the controller has to shuffle data first. When space is tight, the controller moves things around just to free a block. That busywork is garbage collection, and when the drive is packed, it turns every save into slow motion.

Here’s the picture that stuck: imagine editing a document on a desk covered in papers. To write one sentence, you first have to relocate piles. That’s write amplification - multiple internal writes to finish one user write.
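
The desk-covered-in-papers picture can be turned into a toy model. The sketch below is a deliberately simplified flash translation layer with greedy garbage collection - block/page counts and the victim-selection policy are my assumptions, not how any real controller works - and it shows write amplification climbing as the drive fills:

```python
import random

def simulate_wa(blocks=64, pages_per_block=32, fill=0.90,
                user_writes=50_000, seed=1):
    """Write amplification of a toy flash translation layer with greedy
    garbage collection: physical page writes per user page write.
    Illustrative only - real controllers are far more sophisticated."""
    valid = [set() for _ in range(blocks)]   # live logical pages per block
    where = {}                               # logical page -> block holding it
    free = list(range(1, blocks))            # erased, writable blocks
    cur, used, writes = 0, 0, 0              # open block, its fill, write count

    def physical_write(lp):
        nonlocal cur, used, writes
        old = where.get(lp)
        if old is not None:
            valid[old].discard(lp)           # invalidate the stale copy
        if used == pages_per_block:          # open block is full
            if free:
                cur, used = free.pop(), 0
            else:                            # GC: erase the block with the
                victim = min(range(blocks),  # fewest live pages, relocating
                             key=lambda b: len(valid[b]))
                movers = list(valid[victim])
                valid[victim].clear()
                for m in movers:             # relocations are internal writes
                    valid[victim].add(m)     # the user never asked for
                    where[m] = victim
                cur, used = victim, len(movers)
                writes += len(movers)
        valid[cur].add(lp)
        where[lp] = cur
        used += 1
        writes += 1

    n_logical = int(blocks * pages_per_block * fill)
    for lp in range(n_logical):              # fill the "drive" sequentially
        physical_write(lp)
    writes = 0                               # then measure steady state only
    rng = random.Random(seed)
    for _ in range(user_writes):             # random small overwrites
        physical_write(rng.randrange(n_logical))
    return writes / user_writes

print(f"half full  : WA = {simulate_wa(fill=0.50):.2f}")
print(f"nearly full: WA = {simulate_wa(fill=0.95):.2f}")
```

On this toy geometry the nearly-full run forces noticeably more physical writes per user write than the half-full run - the same cliff your last gigabyte lives on.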

TravnikovDev / ai-didnt-kill-creativity-default-workflows-did.md
Created November 28, 2025 11:19
LinkedIn Post - 2025-11-28 06:19

AI didn’t kill creativity - default workflows did

I kept seeing the same ideas everywhere and got curious. Do we actually have fewer ideas, or just more of the same? I asked my n8n + ChatGPT/Claude research bot to dig. The pattern was too clear to ignore.

Most of us now use AI for research and brainstorming. Because models are trained on similar data and tuned for safe answers, they hand out the same low-hanging fruit. In visuals: cats, dogs, horses. In apps: yet another to-do app with AI. In content: the “10 tips” post. Markets flood, uniqueness falls, margins follow. 🤖

The exceptions stood out. People who inject a weird personal style, a rare data source, or a hard constraint end up shipping work that cuts through the noise. It’s not “don’t use AI.” It’s “don’t outsource taste, direction, or differentiation to AI.”

What’s going on in simple words:

  • Anchoring: If AI leads your first step, you stick near the average it suggests.
TravnikovDev / proprietary-software-might-be-your-safety-net.md
Created November 27, 2025 21:22
LinkedIn Post - 2025-11-27 16:22

Proprietary software might be your safety net 🔒

I wondered what kind of work stays hardest for AI and weekend automations to eat. The blunt thought that hit me: live inside proprietary software and you’re oddly safe.

I asked my n8n + ChatGPT helper to dig in. It even stalled on me mid-run, but the pattern is obvious. AI eats what it can access. Open repos, open APIs, public docs - that’s a buffet. Closed platforms are a locked kitchen with a bouncer and a dress code.

Think of the big vendor kingdoms: ERP suites like SAP, EHR systems like Epic, enterprise CRM like Salesforce, design stacks like Adobe, GIS like ArcGIS, finance stacks from Oracle, telecom and banking cores. Popular, mission critical, license-gated, compliance-heavy.

Why they resist automation: access is paywalled, APIs are limited, SDKs need certs, sandboxes are gated, rate limits bite, and production sits behind SSO, VPNs, and audit trails. Your n8n flow or scrappy Python script cannot just stroll in. Even RPA breaks when a tiny UI label mo

When the Bot Sounds Brighter Than Your Friends: How English-First AI Hijacks Our Voice, Tugs Our Feelings, and Nudges Our Wallets

You type for an hour. The bot answers in ten seconds - with a line so sharp you wish you’d said it. In English it sounds like a genius. In your language, it trips over its shoelaces. Magic? Mirror? Marketing trick? This is the love story - and cautionary tale - of how AI can make us feel smarter, closer, and sometimes a little more alone.

I got curious and asked my ChatGPT researcher-bot to dig into this feeling. The pattern is real. In English, models flex. In Russian, they often wobble. Not your imagination.

Here’s the simple reason: training diet decides talent. Most of the data is English, so the model’s “home court” is English. Multilingual tuning helps, but the gap sticks around on hard tasks and nuanced writing. The weird hack that works: ask in Russian, have it reason in English, then translate back. Not perfect, but it boosts accuracy. You’re not dumb - you’re sparring
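
That hack is just prompt plumbing. A minimal sketch - the wrapper below builds a hypothetical prompt template (the exact wording is mine, not any documented API; tune it for your model):

```python
def english_reasoning_prompt(question_ru: str) -> str:
    """Wrap a Russian question so the model reasons in its 'home court'
    language, then answers back in Russian. Hypothetical template."""
    return (
        "The question below is written in Russian.\n"
        "Step 1: translate it into English.\n"
        "Step 2: reason through it step by step in English.\n"
        "Step 3: give only the final answer, in Russian.\n\n"
        f"Question: {question_ru}"
    )

print(english_reasoning_prompt("Почему небо голубое?"))
```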

After the sugar high: why e-commerce feels tougher in 2025

I got curious about the “everything is down” narrative. My n8n researcher pulled proper sources - Census Bureau, Digital Commerce 360, eMarketer, Adobe - and the picture is more interesting than the Google Trends lines suggest.

Big picture: U.S. e-commerce dollars are at record highs. Q2 2025 online sales were roughly $304B and about 16.3% of retail - up from ~11% pre-2020. 2024 full-year e-commerce hit about $1.19T. Growth didn’t crash. It normalized.

The “decline” many of us feel is mostly the hype deflating. Search interest in eBay, Etsy, and Temu is off its peak levels, but search is not sales. It’s a noisy proxy, not a P&L.

What actually shrank: some marketplaces and margins. eBay GMV peaked near $100B in 2020, then slid to roughly $73-75B in 2023-2024, with a 2025 rebound but still below peak. Etsy’s GMS has hovered around $12-13B for years - softer than 2020 mania. Amazon is the outlier - retail kept growing, and Prime Day 2025 set new spend records.

TravnikovDev / the-new-hiring-nightmare-the-auto-grader.md
Created November 25, 2025 20:31
LinkedIn Post - 2025-11-25 15:31

The new hiring nightmare: the auto-grader

I got curious about why one-minute, AI-graded assessments feel impossible. Not because the work is hard, but because the grader rewards the wrong things. 🤖

I asked my n8n/ChatGPT researcher to dig in. The pattern is ugly but simple: many LLM auto-graders score verbosity over clarity. In 60 seconds, a normal human writes 2-3 sentences. The model expects paragraphs. That’s not skill, that’s word count.

It’s like judging a painting by square meters. You can cover more canvas, but it doesn’t make it better.

What makes it worse: quirky prompts that don’t reflect real work, untested rubrics, and opaque scoring. Internal tools often ship fast and get audited later (if ever). Some question banks are drafted by generalists or juniors with help from AI, across languages they don’t really use. Coding graders catch correctness on narrow tests, but miss nuance, trade-offs, and real-world constraints.

The real top use cases of ChatGPT (triangulated, not guessed) 🔎

I got curious about what people actually do with LLMs beyond the hype. There’s no public “top queries” board, so I asked my bot to triangulate: OpenAI’s anonymized 1.5M-conversation study, Microsoft’s Work Trend Index, Deloitte’s consumer survey, Stack Overflow’s dev telemetry, Reuters Institute, Elon University, plus Attest. No vibes - just signals.

First big picture: OpenAI’s 2025 study splits messages into Asking/Doing/Expressing at roughly 49%/40%/11%. About 70% of usage is non-work, 30% is work. That frames everything. 📊

Here’s the closest thing to a real Top 10:

  1. Quick answers and explanations - 49% of all messages fall into “Asking” (OpenAI/NBER). It’s mostly “explain this” and “what should I do?”
  2. Writing and drafting - the biggest chunk inside “Doing” (OpenAI) and 48% of consumers say they use GenAI for writing/communication (Deloitte).
  3. Learning and tutoring - 51% of U.S. adults say their main purpose is informal learning (Elon