Week 3

13 June, 2020

Week three is done! This week was pretty much a continuation of all the exploration and experimentation from last week. A lot of questions got answered too!

Firstly, it turns out that the odd and cryptic WinML errors were just because Windows doesn't support ONNX opset 10 yet (check out the compatibility here). I could just downgrade the opset version, right? Ah, if only it were that simple in real life 😓. Turns out the lower opsets don't support an operator called Dynamic Slice that is crucial for DE⫶TR to work. So WinML is off the table now, at least for this model. I did create an issue on DE⫶TR's GitHub repository last week but it didn't receive much traction. It does, however, bring us to two of our next points of discussion!
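For anyone curious what "downgrading the opset" looks like in practice, here's a minimal sketch using the standard `torch.onnx.export` call. The model and output path are just placeholders; with DE⫶TR itself this export fails on the lower opsets because they lack the Dynamic Slice operator.

```python
import torch
import torchvision

# Minimal sketch: exporting a model to ONNX while pinning the opset version.
# A plain torchvision model exports fine at a lower opset, but DETR does not,
# because the slicing behaviour it needs only exists from opset 10 onwards.
model = torchvision.models.resnet18().eval()
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",      # placeholder output path
    opset_version=9,   # an opset below 10, since WinML didn't support 10 yet
)
```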

Always ask for help! No matter how stupid the question seems, it is always better to seek advice and/or help from those who might know more about the matter than you do. I wasn't sure if it was possible to package Python libraries in an NVDA add-on, so I just asked the wonderful people on the NVDA Add-ons discussion group. I received so many incredibly warm and helpful replies, it totally made my day 😍! You can find the discussion here. It would be unfair to thank the add-on community for their help without also thanking my mentor Reef for all his guidance and for always making sure I'm on the right track! Yes, you can definitely package Python libraries in an add-on. It's as simple as copying the library's .py or .pyd (since we are working on Windows) file from site-packages and pasting it into the same directory as your Python code. Of course, the libraries I needed are much more complex (NumPy, Pillow, etc.), so I had to copy their whole directories instead. By default, Python first looks for imports in the same directory as your code and then at other locations (you can find all of these locations by importing the built-in sys module and inspecting sys.path). Using this method I was able to successfully run the YOLO-v3 darknet model on Windows! However, the size of the libraries is definitely a concern.
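To make the import mechanics a bit more concrete, here's a minimal sketch, assuming the bundled copies live in a "libs" folder (a hypothetical name, not an NVDA convention) next to the add-on's own code:

```python
import os
import sys

# Hypothetical layout: the add-on ships a "libs" folder containing copies of
# numpy/, PIL/, etc. taken from site-packages.
ADDON_DIR = os.path.dirname(os.path.abspath(__file__))
LIBS_DIR = os.path.join(ADDON_DIR, "libs")

# Putting that folder at the front of sys.path makes Python look there first,
# just as it already looks in the directory containing this file.
if LIBS_DIR not in sys.path:
    sys.path.insert(0, LIBS_DIR)

import numpy as np      # resolved from libs/numpy if it was copied there
from PIL import Image   # resolved from libs/PIL

print(sys.path)  # shows every location Python searches for imports
```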

Secondly, TorchScript seems like a much better candidate for this project. The developers over at DE⫶TR pointed me to a PR that makes life easier by allowing one to download their models in TorchScript form! I decided it would be smarter to test out TorchScript before actually investing time in converting other models to TorchScript form. Although a little all over the place, this tutorial is the best one to follow. Unfortunately, it isn't written with Windows in mind, so I had to jump through a few hoops before finally getting it to work. More specifically, these hoops included making sure the right C++ compilers are installed (MSVC in my case), and understanding what CMake and Make are and installing them on Windows. Also, be sure to run `cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..` with an absolute path to the libtorch directory and not a relative path, and run `cmake --build . --config Release` instead of the `make` command towards the end. Unfortunately, there are a few challenges with this solution too. For starters, the torch.dll file is quite large, making the final add-on size, along with the model weights, around 500 MB! Another issue is that PyTorch can only run on 64-bit versions of Windows and Python, while NVDA is built to run on 32-bit Python, even on 64-bit versions of Windows.
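For completeness, here's roughly what consuming a TorchScript model looks like from the Python side; the file name, input size, and output handling below are placeholders rather than DE⫶TR specifics.

```python
import torch

# A scripted/traced model can be loaded and run without the original Python
# class definitions, which is what makes TorchScript attractive here.
model = torch.jit.load("detr_torchscript.pt")  # placeholder file name
model.eval()

dummy_image = torch.randn(1, 3, 800, 800)  # stand-in for a preprocessed image
with torch.no_grad():
    outputs = model(dummy_image)

print(type(outputs))  # inspect what the scripted model actually returns
```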

Here's to hoping that next week brings solutions to these problems!
