Giovanni Laquidara giolaq

🤠
Always studying
View GitHub Profile
@giolaq
giolaq / 01-update-docs.md
Created June 15, 2025 08:42 — forked from badlogic/01-update-docs.md
Yakety Documentation (Ordered) - LLM-optimized docs with concrete file references

Update Documentation

You will generate LLM-optimized documentation with concrete file references and flexible formatting.

Your Task

Create documentation that allows humans and LLMs to:

  • Understand project purpose - what the project does and why
  • Get architecture overview - how the system is organized
  • Build on all platforms - build instructions with file references
@giolaq
giolaq / oneprompt.txt
Last active May 8, 2025 16:05
oneprompt.txt
Study the codebase in the current directory and develop the following feature using the same coding style and best practices as the project.
Create a new carousel page in the React Native Multi-TV App that fetches and displays data from the VibePope API. The implementation should include:
1. Create a new API service file at services/api.ts that:
- Calls the API endpoint at https://vibepope.onrender.com under the /api/cardinals path
- Defines interfaces for the API response and carousel items
- Implements a fetchCarouselData() function that authenticates with NODE_ENV = production
- Maps the cardinal data (name, biography_text, photo_url) to the format expected by the carousel
import React from 'react';
import { View, ViewStyle, StyleSheet } from 'react-native';
import { scaledPixels } from '@/hooks/useScale';

type SafeZoneProps = {
  children: React.ReactNode;
  style?: ViewStyle;
  top?: boolean;
  bottom?: boolean;
  left?: boolean;
https://www.amazon.co.uk/gp/mas/beta/redeem/BPY7DX9MVL6RS/d2d63935-fffd-485b-a523-5b4bcc3fca55
fun main(args: Array<String>) {
    println("Hello, World!")
}
@giolaq
giolaq / build.gradle
Created April 12, 2021 14:32
build.gradle
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-model:2.1.0.300'
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-sdk:2.1.0.300'
package com.laquysoft.wordsearchai.textrecognizer

import android.graphics.Bitmap
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.common.MLFrame

class HMSDocumentTextRecognizer : DocumentTextRecognizer {
    // private val detector = MLAnalyzerFactory.getInstance().remoteDocumentAnalyzer
    private val detector = MLAnalyzerFactory.getInstance().localTextAnalyzer
package com.laquysoft.wordsearchai.textrecognizer

import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

class GMSDocumentTextRecognizer : DocumentTextRecognizer {
    private val detector = FirebaseVision.getInstance().onDeviceTextRecognizer
class WordSearchAiViewModel(
    private val resourceProvider: ResourceProvider,
    private val recognizer: DocumentTextRecognizer
) : ViewModel() {

    val resultList: MutableLiveData<List<String>> = MutableLiveData()
    val resultBoundingBoxes: MutableLiveData<List<Symbol>> = MutableLiveData()

    private lateinit var dictionary: List<String>
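The ViewModel above takes a DocumentTextRecognizer through its constructor, which is what lets the GMS and HMS implementations be swapped at runtime. The interface itself is not included in the gist; the sketch below is a hypothetical, platform-free version of that abstraction (using a ByteArray in place of an Android Bitmap) together with a fake implementation, to illustrate the callback-based dependency-injection design:

```kotlin
// Hypothetical shape of the DocumentTextRecognizer abstraction that both
// GMSDocumentTextRecognizer and HMSDocumentTextRecognizer implement; the
// real interface is not shown in the gist. Bitmap is replaced by a plain
// ByteArray here so the sketch has no Android dependencies.
interface DocumentTextRecognizer {
    fun processImage(image: ByteArray, onSuccess: (List<String>) -> Unit)
}

// A trivial fake, showing how a consumer (such as the ViewModel) would
// receive recognized words through the success callback.
class FakeRecognizer : DocumentTextRecognizer {
    override fun processImage(image: ByteArray, onSuccess: (List<String>) -> Unit) {
        onSuccess(listOf("WORD", "SEARCH"))
    }
}
```

Because the ViewModel only sees the interface, unit tests can inject a fake like this instead of a real on-device recognizer.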
object DocumentTextRecognizerService {
    private fun getServiceType(context: Context) = when {
        isGooglePlayServicesAvailable(context) -> ServiceType.GOOGLE
        isHuaweiMobileServicesAvailable(context) -> ServiceType.HUAWEI
        else -> ServiceType.GOOGLE
    }
}
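The service object above selects a text-recognition backend at runtime: Google's on-device recognizer when Play Services is present, Huawei's when HMS Core is present, with Google as the fallback. A minimal, platform-free sketch of that dispatch logic (the availability checks are stubbed as booleans here, since the real ones query GoogleApiAvailability / HuaweiApiAvailability on the device):

```kotlin
enum class ServiceType { GOOGLE, HUAWEI }

// Stand-in for the gist's getServiceType(context): the two flags model the
// results of the Play Services / HMS Core availability checks.
fun getServiceType(gmsAvailable: Boolean, hmsAvailable: Boolean): ServiceType = when {
    gmsAvailable -> ServiceType.GOOGLE
    hmsAvailable -> ServiceType.HUAWEI
    else -> ServiceType.GOOGLE // fall back to GMS, matching the gist
}

// e.g. on a Huawei device without Play Services:
// getServiceType(gmsAvailable = false, hmsAvailable = true) == ServiceType.HUAWEI
```

Note that because the GMS branch is checked first, a device with both service frameworks installed will always get the Google recognizer.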