I hereby claim:
- I am nchaimov on github.
- I am nchaimov (https://keybase.io/nchaimov) on keybase.
- I have a public key whose fingerprint is D64A E10A 7F24 BC7D 3E1A 7F8A 96F5 E0A5 BC6A 8B88
To claim this, I am signing this object:
| #include "rose.h" | |
| class InheritedAttribute { | |
| }; | |
| class visitorTraversal : public AstTopDownProcessing<InheritedAttribute>{ | |
| public: | |
| virtual InheritedAttribute evaluateInheritedAttribute(SgNode* n, InheritedAttribute inheritedAttribute); | |
| }; | |
| InheritedAttribute visitorTraversal::evaluateInheritedAttribute(SgNode* n, InheritedAttribute inheritedAttribute) { SgExprStatement * expr = isSgExprStatement(n); | |
| if(expr != NULL) { |
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

// Resolve a host name with getaddrinfo(), optionally requesting the canonical name.
int lookup_host(const char *host, int use_canonname) {
    struct addrinfo hints, *res;
// Define _GNU_SOURCE before any includes so GNU extensions in <netdb.h> are exposed.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif

#include <stdlib.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netdb.h>
#!/usr/bin/env python
"""TAU trial data for TAU Profile.x.y.z format profiles.

Parses a set of TAU profile files and yields multi-indexed Pandas dataframes for the
interval and atomic events.
"""
from __future__ import print_function
import csv
import glob
import mmap
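
The preview above cuts off after the imports. As a rough sketch of the idea in the docstring (not the script's actual implementation), the snippet below shows one way per-rank profile files might be gathered into a multi-indexed Pandas dataframe; the index_profiles name is hypothetical, the sketch assumes TAU's usual profile.<node>.<context>.<thread> file naming, and it omits the real parsing of the interval and atomic event records.

import glob
import os
import pandas as pd

def index_profiles(directory="."):
    # Collect per-rank profile files; TAU typically names them profile.<node>.<context>.<thread>.
    rows = []
    for path in sorted(glob.glob(os.path.join(directory, "profile.*.*.*"))):
        node, context, thread = (int(part) for part in os.path.basename(path).split(".")[1:4])
        rows.append({"node": node, "context": context, "thread": thread, "path": path})
    # Explicit columns keep the frame well-formed even when no files are found.
    frame = pd.DataFrame(rows, columns=["node", "context", "thread", "path"])
    return frame.set_index(["node", "context", "thread"]).sort_index()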
| """ | |
| `Learn the Basics <intro.html>`_ || | |
| **Quickstart** || | |
| `Tensors <tensorqs_tutorial.html>`_ || | |
| `Datasets & DataLoaders <data_tutorial.html>`_ || | |
| `Transforms <transforms_tutorial.html>`_ || | |
| `Build Model <buildmodel_tutorial.html>`_ || | |
| `Autograd <autogradqs_tutorial.html>`_ || | |
| `Optimization <optimization_tutorial.html>`_ || | |
| `Save & Load Model <saveloadrun_tutorial.html>`_ |
#!/usr/bin/env python
# coding: utf-8

# # Microbenchmarking Neuron Devices (Trn1/Inf2)

# ## Introduction
#
# This guide reviews best practices for benchmarking the performance of Neuron devices. It shows how to separate compilation time from execution time, how to isolate device time from end-to-end execution time, and how to warm up the device, and it covers a few pitfalls to be aware of. The guide provides example PyTorch code that can be used as a template for measuring performance.
#
# This Jupyter notebook should be run on a Trn1/Inf2 instance (trn1.2xlarge/inf2.xlarge or larger).
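
As a generic illustration of the warm-up and timing-separation pattern described in the introduction (not the notebook's actual code), the sketch below benchmarks a plain PyTorch module as a stand-in: untimed warm-up iterations absorb one-time costs such as first-call compilation, steady-state latency is measured per iteration, and the end-to-end wall time (including warm-up) is reported separately. On Trn1/Inf2 the model would first be compiled for and placed on the Neuron device, which is not shown here.

import time
import torch

# Hypothetical stand-in model and input; a Neuron-compiled model would replace these.
model = torch.nn.Linear(1024, 1024).eval()
example = torch.randn(8, 1024)

end_to_end_start = time.perf_counter()

# Warm-up: untimed iterations absorb one-time costs (lazy initialization,
# first-call compilation) so they do not skew the steady-state measurement.
with torch.no_grad():
    for _ in range(10):
        model(example)

# Steady-state latency, timed per iteration and kept separate from end-to-end time.
latencies = []
with torch.no_grad():
    for _ in range(100):
        start = time.perf_counter()
        model(example)
        latencies.append(time.perf_counter() - start)

end_to_end = time.perf_counter() - end_to_end_start
print("mean latency: %.3f ms" % (1000.0 * sum(latencies) / len(latencies)))
print("end-to-end (including warm-up): %.3f s" % end_to_end)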