hsainfo (ROCm 1.5 + Ryzen + R9 Nano)
Number of platforms 1
Platform Name AMD Accelerated Parallel Processing
Platform Vendor Advanced Micro Devices, Inc.
Platform Version OpenCL 2.0 AMD-APP.internal.dbg (2415.0)
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_icd cl_amd_object_metadata cl_amd_event_callback cl_amd_offline_devices
Platform Extensions function suffix AMD
Platform Name AMD Accelerated Parallel Processing
Number of devices 1
Device Name gfx803
Device Vendor Advanced Micro Devices, Inc.
Device Vendor ID 0x1002
Device Version OpenCL 1.2
Driver Version 1.1 (HSA,LC)
Device OpenCL C Version OpenCL C 2.0
Device Type GPU
Device Available Yes
Device Profile FULL_PROFILE
Max compute units 64
Max clock frequency 1000MHz
Device Partition (core)
Max number of sub-devices 64
Supported partition types none specified
Max work item dimensions 3
Max work item sizes 256x256x256
Max work group size 256
Compiler Available Yes
Linker Available Yes
Preferred work group size multiple 64
Preferred / native vector sizes
char 4 / 4
short 2 / 2
int 1 / 1
long 1 / 1
half 1 / 1 (n/a)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals No
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Address bits 64, Little-Endian
Global memory size 4294967296 (4GiB)
Error Correction support No
Max memory allocation 3221225472 (3GiB)
Unified memory for Host and Device No
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Global Memory cache type Read/Write
Global Memory cache size 16384 (16KiB)
Global Memory cache line 64 bytes
Image support No
Local memory type Local
Local memory size 65536 (64KiB)
Max constant buffer size 3221225472 (3GiB)
Max number of constant args 8
Max size of kernel argument 1024
Queue properties
Out-of-order execution No
Profiling Yes
Prefer user sync for interop Yes
Profiling timer resolution 1ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
printf() buffer size 1048576 (1024KiB)
Built-in kernels
Device Extensions cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_amd_media_ops cl_amd_media_ops2 cl_khr_subgroups cl_khr_depth_images
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) AMD Accelerated Parallel Processing
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) Success [AMD]
clCreateContext(NULL, ...) [default] Success [AMD]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx803
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx803
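
For reference, the platform/device listing and the NULL-platform checks above map onto a handful of standard OpenCL host API calls. Below is a minimal C sketch of those queries; it is not the code the tool actually runs, only an illustration, and the single-platform / single-GPU assumption is mine.

/* Minimal OpenCL enumeration sketch (error handling omitted).
 * Illustrates the queries behind the listing above; not the tool's own code. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_uint nplat = 0;
    clGetPlatformIDs(0, NULL, &nplat);                  /* "Number of platforms" */
    cl_platform_id plat;
    clGetPlatformIDs(1, &plat, NULL);

    char name[256];
    clGetPlatformInfo(plat, CL_PLATFORM_NAME, sizeof(name), name, NULL);
    printf("Platform Name %s\n", name);                 /* AMD Accelerated Parallel Processing */

    cl_device_id dev;
    cl_uint ndev = 0;
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, &ndev);

    clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(name), name, NULL);
    cl_uint cu = 0;
    clGetDeviceInfo(dev, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(cu), &cu, NULL);
    cl_ulong gmem = 0;
    clGetDeviceInfo(dev, CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(gmem), &gmem, NULL);
    printf("Device Name %s, %u CUs, %llu bytes global memory\n",
           name, cu, (unsigned long long)gmem);

    /* The "NULL platform behavior" checks pass a NULL platform, letting the
     * ICD loader pick the default platform for the requested device type. */
    cl_context ctx = clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU, NULL, NULL, NULL);
    if (ctx) clReleaseContext(ctx);
    return 0;
}
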
HSA SystemInfo
* version: 1.1
* timestamp: 2182.101814726 [s] (freq: 1000 [MHz])
* signal-max-wait: 18446744073709551615
* endianness: Little
* machine_model: Large
AgentInfo
* name: AMD Ryzen 7 1800X Eight-Core Processor
* vendor: CPU
* feature: AgentDispatch
* machine_model: Large
* profile: Full
* float_rounding: Near
* base_profile_float_rounding: Near
* fast_f16_operation: false
* wavefront size: 0
* workgroup max dim: [0, 0, 0]
* workgroup max size: 0
* grid max dim: Dim3 { x: 0, y: 0, z: 0 }
* grid max size: 0
* fbarrier max size: 0
* queues max: 0
* queue max size: 0
* queue min size: 0
* queue type: Multi
* node: 0
* device: CPU
* isa: ISA { handle: 0 }
* version: 1.1
| RegionInfo
| * segment: Global
| * global flags: {KernArg, FineGrained}
| * size: 16840175616
| * alloc max size: 16840175616
| * runtime alloc allowed: true
| * runtime alloc granule: 4096
| * runtime alloc alignment: 4096
| RegionInfo
| * segment: Global
| * global flags: {CoarseGrained}
| * size: 16840175616
| * alloc max size: 16840175616
| * runtime alloc allowed: true
| * runtime alloc granule: 4096
| * runtime alloc alignment: 4096
AgentInfo
* name: gfx803
* vendor: AMD
* feature: KernelDispatch
* machine_model: Large
* profile: Base
* float_rounding: Near
* base_profile_float_rounding: Near
* fast_f16_operation: false
* wavefront size: 64
* workgroup max dim: [1024, 1024, 1024]
* workgroup max size: 1024
* grid max dim: Dim3 { x: 4294967295, y: 4294967295, z: 4294967295 }
* grid max size: 4294967295
* fbarrier max size: 32
* queues max: 128
* queue max size: 131072
* queue min size: 4096
* queue type: Multi
* node: 1
* device: GPU
* isa: ISA { handle: 140671271027024 }
* version: 1.1
| RegionInfo
| * segment: Global
| * global flags: {CoarseGrained}
| * size: 4294967296
| * alloc max size: 4294967296
| * runtime alloc allowed: true
| * runtime alloc granule: 4096
| * runtime alloc alignment: 4096
| RegionInfo
| * segment: Group
| * global flags: {}
| * size: 65536
| * alloc max size: 0
| * runtime alloc allowed: false
| * runtime alloc granule: 0
| * runtime alloc alignment: 0
| RegionInfo
| * segment: Global
| * global flags: {FineGrained, KernArg}
| * size: 16840175616
| * alloc max size: 16840175616
| * runtime alloc allowed: true
| * runtime alloc granule: 4096
| * runtime alloc alignment: 4096
| RegionInfo
| * segment: Global
| * global flags: {CoarseGrained}
| * size: 16840175616
| * alloc max size: 16840175616
| * runtime alloc allowed: true
| * runtime alloc granule: 4096
| * runtime alloc alignment: 4096
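
The AgentInfo / RegionInfo blocks above come from walking the HSA agents and their memory regions. A minimal C sketch of the same queries against the HSA runtime API follows; the field selection and output formatting are illustrative, not the tool's actual code.

/* Minimal HSA agent/region enumeration sketch (error handling omitted). */
#include <stdio.h>
#include <stdint.h>
#include <hsa/hsa.h>

static hsa_status_t print_region(hsa_region_t region, void *data) {
    hsa_region_segment_t seg;
    size_t size = 0;
    hsa_region_get_info(region, HSA_REGION_INFO_SEGMENT, &seg);
    hsa_region_get_info(region, HSA_REGION_INFO_SIZE, &size);
    printf("  RegionInfo segment=%d size=%zu\n", (int)seg, size);
    return HSA_STATUS_SUCCESS;
}

static hsa_status_t print_agent(hsa_agent_t agent, void *data) {
    char name[64] = {0};
    hsa_device_type_t dev;
    uint32_t wavefront = 0, queue_max = 0;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_NAME, name);
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &dev);
    hsa_agent_get_info(agent, HSA_AGENT_INFO_WAVEFRONT_SIZE, &wavefront);
    hsa_agent_get_info(agent, HSA_AGENT_INFO_QUEUE_MAX_SIZE, &queue_max);
    printf("AgentInfo name=%s device=%d wavefront=%u queue_max=%u\n",
           name, (int)dev, wavefront, queue_max);
    return hsa_agent_iterate_regions(agent, print_region, NULL);
}

int main(void) {
    hsa_init();
    hsa_iterate_agents(print_agent, NULL);  /* CPU agent, then gfx803 GPU agent */
    hsa_shut_down();
    return 0;
}
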
Initializing the hsa runtime succeeded.
Checking finalizer 1.0 extension support succeeded.
Generating function table for finalizer succeeded.
Getting a gpu agent succeeded.
Querying the agent name succeeded.
The agent name is gfx803.
Querying the agent maximum queue size succeeded.
The maximum queue size is 131072.
Creating the queue succeeded.
"Obtaining machine model" succeeded.
"Getting agent profile" succeeded.
Create the program succeeded.
Adding the brig module to the program succeeded.
Query the agents isa succeeded.
Finalizing the program succeeded.
Destroying the program succeeded.
Create the executable succeeded.
Loading the code object succeeded.
Freeze the executable succeeded.
Extract the symbol from the executable succeeded.
Extracting the symbol from the executable succeeded.
Extracting the kernarg segment size from the executable succeeded.
Extracting the group segment size from the executable succeeded.
Extracting the private segment from the executable succeeded.
Creating a HSA signal succeeded.
Finding a fine grained memory region succeeded.
Allocating argument memory for input parameter succeeded.
Allocating argument memory for output parameter succeeded.
Finding a kernarg memory region succeeded.
Allocating kernel argument memory buffer succeeded.
Dispatching the kernel succeeded.
Passed validation.
Freeing kernel argument memory buffer succeeded.
Destroying the signal succeeded.
Destroying the executable succeeded.
Destroying the code object succeeded.
Destroying the queue succeeded.
Freeing in argument memory buffer succeeded.
Freeing out argument memory buffer succeeded.
Shutting down the runtime succeeded.
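
The checklist above appears to follow the flow of the standard ROCm/HSA finalizer samples (vector_copy and friends). Below is a stripped-down C sketch of just the runtime, queue and signal lifecycle that the log exercises; the finalization, kernarg allocation and actual dispatch-packet steps are elided, and the helper names are my own.

/* Sketch of the runtime/queue/signal lifecycle shown in the log above.
 * Code-object finalization, kernarg memory and kernel dispatch are omitted. */
#include <stdio.h>
#include <stdint.h>
#include <hsa/hsa.h>

static hsa_status_t find_gpu(hsa_agent_t agent, void *data) {
    hsa_device_type_t dev;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &dev);
    if (dev == HSA_DEVICE_TYPE_GPU) {
        *(hsa_agent_t *)data = agent;
        return HSA_STATUS_INFO_BREAK;       /* stop iterating once a GPU is found */
    }
    return HSA_STATUS_SUCCESS;
}

int main(void) {
    hsa_init();                                          /* "Initializing the hsa runtime" */

    hsa_agent_t gpu = {0};
    hsa_iterate_agents(find_gpu, &gpu);                  /* "Getting a gpu agent" */

    char name[64] = {0};
    hsa_agent_get_info(gpu, HSA_AGENT_INFO_NAME, name);  /* "The agent name is gfx803." */

    uint32_t queue_size = 0;
    hsa_agent_get_info(gpu, HSA_AGENT_INFO_QUEUE_MAX_SIZE, &queue_size);

    hsa_queue_t *queue = NULL;                           /* "Creating the queue" */
    hsa_queue_create(gpu, queue_size, HSA_QUEUE_TYPE_SINGLE,
                     NULL, NULL, UINT32_MAX, UINT32_MAX, &queue);

    hsa_signal_t signal;                                 /* "Creating a HSA signal" */
    hsa_signal_create(1, 0, NULL, &signal);

    /* ... finalize code object, allocate kernarg memory, dispatch kernel ... */

    hsa_signal_destroy(signal);                          /* "Destroying the signal" */
    hsa_queue_destroy(queue);                            /* "Destroying the queue" */
    hsa_shut_down();                                     /* "Shutting down the runtime" */

    printf("agent %s, max queue size %u\n", name, queue_size);
    return 0;
}
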