By training Google's Teachable Machine to learn our gestures, we could swing our hands or make a face to send the corresponding reaction command to Animal Crossing's API.
The Nintendo Switch Online (NSO) app on the phone lets us send reaction commands to the game. With a tool called mitmproxy, we can inspect the requests our phone sends and replay the reaction command ourselves.
https://gist.github.com/7a1e3d97a6460bf6e6f92f5a756b16d5
Or install it with `pip install mitmproxy`.
https://gist.github.com/ca6986f0fa1f7f3f3f0ac3328dc1fc92
With your phone connected to the same network as your computer, visit http://mitm.it/ and install the certificate. In your phone's Wi-Fi settings, add a manual proxy that points to your computer's IP address (mitmproxy listens on port 8080 by default).
Checking IP Address on your Mac
Setting Manual Proxy
Download Certificate on http://mitm.it
About > Certificate Trust Settings > Enable Certificate
Now launch the NSO app on the phone and play around with the Animal Crossing app. You should see your phone's requests coming in through the mitmproxy terminal. We can work out the request format for reactions by sending a few from the phone.
The request endpoint for messaging and reactions is `api/sd/v1/messages`. Click on it and you should see the cookies and form data of this POST request.
The POST data is as follows. https://gist.github.com/281b41d4371833d83c321fc5a099667f
Tip: Press q in the mitmproxy terminal to return to the request list.
These are some of the reaction types I've collected: Hello, Greeting, HappyFlower, Negative, Apologize, Aha, QuestionMark...
Note: I don't have all the reactions in my game right now. It would be great if anyone could provide the other reaction values!
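Putting that captured request format into code, the payload can be sketched as a small helper. The field names mirror the form data above, but the `"emoticon"` type value is an assumption — check it against your own mitmproxy capture:

```python
# Reaction values collected so far (incomplete list, from the section above).
REACTIONS = {
    "Hello", "Greeting", "HappyFlower", "Negative",
    "Apologize", "Aha", "QuestionMark",
}

def build_reaction_payload(reaction: str) -> dict:
    """Build the body for a POST to api/sd/v1/messages.

    The "type" value here is an assumption -- verify it against the
    request mitmproxy captured from your phone.
    """
    if reaction not in REACTIONS:
        raise ValueError(f"Unknown reaction: {reaction!r}")
    return {"body": reaction, "type": "emoticon"}
```

Validating the reaction name up front makes it obvious when a typo, rather than the API, is to blame for a failed request.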
Access to the Nintendo Switch Online API requires making multiple requests to Nintendo's servers with an authentication token. Full tutorial: {% link https://dev.to/mathewthe2/intro-to-nintendo-switch-rest-api-2cm7 %}
Successful authentication will give us three values:
- _g_token cookie
- _park_session cookie
- authentication bearer token
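As a sketch of how those three values slot into a request, here is a standard-library version that assembles (but does not send) the authenticated POST. The host name is a placeholder and the JSON body format is an assumption — substitute whatever mitmproxy captured on your setup:

```python
import json
import urllib.request

def make_reaction_request(g_token, park_session, bearer_token, reaction,
                          host="example.invalid"):
    """Assemble the authenticated reaction POST for api/sd/v1/messages.

    The cookie and header names come from the three authentication values
    above; `host` and the body format are placeholders -- replace them
    with what mitmproxy showed for your own requests.
    """
    body = json.dumps({"body": reaction, "type": "emoticon"}).encode()
    req = urllib.request.Request(
        f"https://{host}/api/sd/v1/messages",
        data=body,
        method="POST",
    )
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization", f"Bearer {bearer_token}")
    req.add_header("Cookie", f"_g_token={g_token}; _park_session={park_session}")
    return req

# To actually send it: urllib.request.urlopen(make_reaction_request(...))
```

Building the `Request` object separately keeps the authentication plumbing testable without hitting Nintendo's servers.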
https://gist.github.com/3f8c37a81130e65b762e89dfc47c6cd4
Test and see if it works :)
https://gist.github.com/eb9f60246729b04de70fc4bac56fc8fb
Google's Teachable Machine is an easy-to-use online tool for training models to recognize speech, images, and poses. If you're new to machine learning, I highly recommend watching Google's five-minute tutorial.
First create a Pose Project.
Choose Webcam for Pose Samples. Name your first class neutral and record yourself without any gestures. Then add extra classes such as clapping or waving. You can be as creative as you want.
When you're done, press Train. When training is done, you can test the model in the preview. Once you're satisfied, press Export Model above the preview panel and download the Tensorflow.js model.
We can use the provided Tensorflow.js Sample Script for a simple user interface. Copy the sample script into an empty HTML file and serve it locally through Node.js.
https://gist.github.com/d3e334338795781d135859b12cef6645
Insert our API call inside the predict() function. The API endpoint should point to our Python server, which sends the reaction.
https://gist.github.com/0b259fe8e511e32f0ac8531be8bc2e4f
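On the Python side, the endpoint that predict() calls can be a very small HTTP server. This is a sketch using only the standard library; the `/reaction`-style handler, the request body shape, and the confidence-threshold logic are my own illustration, not the original code:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CONFIDENCE_THRESHOLD = 0.9  # illustrative value; tune to taste

def choose_reaction(predictions, threshold=CONFIDENCE_THRESHOLD):
    """Pick the highest-probability class if it beats the threshold.

    `predictions` maps class name -> probability, mirroring the per-class
    outputs of the Teachable Machine model. Returns None for the
    'neutral' class or low-confidence frames so we don't spam the game.
    """
    name, prob = max(predictions.items(), key=lambda kv: kv[1])
    if name == "neutral" or prob < threshold:
        return None
    return name

class ReactionHandler(BaseHTTPRequestHandler):
    """Accepts {"predictions": {...}} from the browser and reacts."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        data = json.loads(self.rfile.read(length) or b"{}")
        reaction = choose_reaction(data.get("predictions", {}))
        if reaction:
            pass  # call the reaction-sending code from the previous section
        self.send_response(200)
        self.end_headers()
        self.wfile.write(json.dumps({"reaction": reaction}).encode())

# To run: HTTPServer(("localhost", 3000), ReactionHandler).serve_forever()
```

Keeping the threshold/debounce decision in `choose_reaction` means the webcam can fire predictions on every frame while the game only ever sees deliberate gestures.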
Be creative and have fun!
- Reverse engineer private APIs with mitmproxy
- Send API requests with Python
- Use Google's Teachable Machine for ML prototyping
Setting up mitmproxy on macOS to intercept HTTPS requests
Congratulations on finishing this tutorial! Let me know if it was helpful. :)