Last active November 18, 2022 11:53
GR 4.0 Runtime Requirements

GR 4.0 API Requirements

As we revisit the C++ implementation of GR 4.0, this doc seeks to define the intended usage of GNU Radio and where we can modify the behavior to which users are accustomed.

NOTE: This is not intended just to represent the API as it currently stands, but to elicit discussion re: how far we are willing to stray from the current GNU Radio usage paradigm. The examples do not dictate the API; they are provided to show generally how objects interact.

Top level design goals

  • Keep Python interface a thin wrapper over C++ API
  • Avoid Python-only implementations outside of OOT modules
  • Modular runtime swappable components both in and out of tree
  • Get block developers to "insert code here" without lots of boilerplate or complicated code

Expected Usage

Python Flowgraph Setup

We would like to keep the interface similar to GR 3:

from gnuradio import gr, module_x, module_y

fg = gr.flowgraph()

b1 = module_x.block_a_f(...)
b2 = module_y.block_b_f(...)
b3 = module_y.block_c_f(...)

fg.connect([b1, b2, b3])
# or fg.connect(b1, "port_name", b2, "port_name")
# or fg.connect(b1, 0, b2, 0)


C++ Flowgraph Setup

#include <gnuradio/module_x/block_a.h>
#include <gnuradio/module_y/block_b.h>
#include <gnuradio/module_y/block_c.h>
#include <gnuradio/flowgraph.h>

using namespace gr;

auto fg = flowgraph();

// b1,b2,b3 are shared_ptrs to base block class
auto b1 = module_x::block_a<float>::make(.../*params*/);
auto b2 = module_y::block_b<float>::make(.../*params*/);
auto b3 = module_y::block_c<float>::make(.../*params*/);

// etc, connect overloads
// variadic connect method - can this be wrapped into Python?

Multiple Implementations per Block

  • Each block should be able to share the base code (ports, parameters) but allow multiple implementations for operation in various domains
    • e.g. python: b1_cuda = module_x.block_a(..., impl=CUDA)
    • e.g. c++: auto b1_cuda = module_x::block_a<float, CUDA>::make(...)
  • A separate implementation would have separately defined constructor, work method and member variables
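The dispatch described above could look something like the following sketch (purely illustrative; the class and factory names are not the GR 4.0 API). The base class owns the shared ports/parameters, and each domain supplies its own work implementation:

```python
class block_a_base:
    """Shared port and parameter definitions for block_a (illustrative)."""
    def __init__(self, k):
        self.k = k  # shared parameter, common to all implementations

class block_a_cpu(block_a_base):
    def work(self, data):
        # straightforward host implementation
        return [x + self.k for x in data]

class block_a_cuda(block_a_base):
    def work(self, data):
        # stand-in for a CUDA kernel launch; same contract as the CPU impl
        return [x + self.k for x in data]

def block_a(k, impl="CPU"):
    """Factory that selects the implementation, mimicking impl=CUDA usage."""
    impls = {"CPU": block_a_cpu, "CUDA": block_a_cuda}
    return impls[impl](k)

b1_cuda = block_a(3, impl="CUDA")
print(b1_cuda.work([1, 2, 3]))  # [4, 5, 6]
```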



Runtime

The runtime is responsible for the interface to deploy and execute a flowgraph. There are no strict requirements on its API other than that it accept a flowgraph object through a constructor or an initialize method. The runtime API doesn't need to be consistent between implementations: a cloud-based runtime might have deploy methods, whereas the default host CPU runtime might just have start and stop.

If more than one scheduler is involved in the execution, the runtime is responsible for partitioning the flowgraph into subgraphs and handing them off appropriately to the configured or default schedulers.
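The partitioning step can be sketched in a few lines (illustrative only; the block names, scheduler names, and helper function are not real GR API). Edges that cross a partition boundary are exactly the points where the runtime must wire two schedulers together:

```python
from collections import defaultdict

def partition(blocks, edges):
    """blocks: {block_name: scheduler}, edges: [(src, dst)].
    Returns ({scheduler: [block_names]}, list of boundary-crossing edges)."""
    subgraphs = defaultdict(list)
    for name, sched in blocks.items():
        subgraphs[sched].append(name)
    crossings = [(s, d) for s, d in edges if blocks[s] != blocks[d]]
    return dict(subgraphs), crossings

# hypothetical assignment: the FFT runs on a GPU scheduler, the rest on the host
blocks = {"src": "host", "fft": "cuda", "snk": "host"}
edges = [("src", "fft"), ("fft", "snk")]
subs, crossings = partition(blocks, edges)
# subs == {'host': ['src', 'snk'], 'cuda': ['fft']}
# both edges cross a scheduler boundary here
```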


Scheduler

The scheduler interface is responsible for execution of part (or all) of a flowgraph. Schedulers are assumed to have an input queue, and the only public interface is for other entities (either the runtime or other schedulers) to push a message into the queue that represents some action.

These messages can be:

  • Indication that streaming data has been produced on a connected port
  • An asynchronous PMT message (indication to run callback)
  • Other runtime control (start, stop, kill)
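The "only public interface is pushing a message" idea can be sketched as follows (illustrative; message kinds and method names are stand-ins, not the GR API). The scheduler drains one queue that carries all three message categories:

```python
import queue

class scheduler:
    def __init__(self):
        self._q = queue.Queue()
        self.log = []  # records actions taken, for illustration

    def push_message(self, msg):
        """The single public entry point: push (kind, payload) into the queue."""
        self._q.put(msg)

    def run(self):
        while True:
            kind, payload = self._q.get()
            if kind == "data":    # streaming data produced on a connected port
                self.log.append(f"run work for {payload}")
            elif kind == "pmt":   # async PMT message -> run a callback
                self.log.append(f"callback {payload}")
            elif kind == "kill":  # runtime control
                break

s = scheduler()
s.push_message(("data", "block1:in0"))
s.push_message(("pmt", "set_frequency"))
s.push_message(("kill", None))
s.run()
# s.log == ['run work for block1:in0', 'callback set_frequency']
```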


Flowgraph

  • At runtime, carries a container of edges and a container of blocks representing the graph connections
  • Has gr::runtime methods (start, stop, etc.) that wrap those of a default runtime


Block

  • Block has ports (see below for runtime definition requirements)
  • A node with a work method and other properties/methods to aid in work
  • work can be called outside of a GR scheduler context
    • e.g. instantiate a block, call work() with appropriate buffer parameters
    • In Python, with some wrapping, I should be able to call `[np arrays], [np arrays])`
      • This is why work has work_io structs passed in rather than operating directly on the internally stored ports
  • Parameters - PMT objects that hold values that can be set via the constructor or changed dynamically
  • Constructor - Prefer this to remain a block_args struct so the constructor signature doesn't change when constructor args are added or removed
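The work_io point above can be demonstrated standalone (illustrative sketch; the work_io dataclass and block class are stand-ins, not the GR API). Because work() receives its buffers as arguments instead of reaching into internally stored ports, a block can be exercised with plain numpy arrays and no scheduler at all:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class work_io:
    """Stand-in for the real work_io struct: lists of input/output buffers."""
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

class multiply_const:
    def __init__(self, k):
        self.k = k

    def work(self, wio):
        # operate only on the buffers handed in, never on stored ports
        wio.outputs[0][:] = wio.inputs[0] * self.k
        return "OK"

b = multiply_const(2.0)
out = np.zeros(4, dtype=np.float32)
wio = work_io(inputs=[np.arange(4, dtype=np.float32)], outputs=[out])
b.work(wio)
# out is now [0., 2., 4., 6.]
```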

Note: in current GR 4, `block` derives from `node`, which more generally defines anything that has ports and can be connected together. A `graph` object is a node. Not sure if this hierarchy is necessary


Port

  • A typed representation of the incoming or outgoing data to/from a block


  • Type: Stream or Message (these are two distinct things, as stream data triggers work() and messages trigger other callback methods)
  • Name: String
  • Index: TBD - would be nice to still be able to index ports by integer
  • Buffer: Return a reference to the buffer reader or writer associated with the port
  • Connect Method: Indicate the connection to another port
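The port properties above might be sketched like this (illustrative; the class layout and names are assumptions, not the GR API):

```python
from enum import Enum

class port_type(Enum):
    STREAM = 1    # stream data triggers work()
    MESSAGE = 2   # messages trigger a callback

class port:
    def __init__(self, name, index, ptype):
        self.name = name            # Name: string
        self.index = index          # Index: integer, for port-by-index access
        self.type = ptype           # Type: stream or message
        self.buffer = None          # Buffer: attached when buffers are instantiated
        self.connections = []       # logical edges to other ports

    def connect(self, other):
        # record the logical connection on both endpoints
        self.connections.append(other)
        other.connections.append(self)

p_out = port("out", 0, port_type.STREAM)
p_in = port("in", 0, port_type.STREAM)
p_out.connect(p_in)
# p_in.connections == [p_out]
```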


Edge

  • A representation of the logical connection between ports
  • Needs to be containable and carried by flowgraph objects


Buffer

  • Needs to be containable in edge/port objects
  • Needs to be generically callable from the block work method
    • the block work function cannot assume a specific buffer implementation
  • Access to underlying data via span
  • Number of items available to read or write
  • Thread safety
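A minimal sketch of those buffer requirements (illustrative only; a real implementation would be a vmcircbuf-style circular buffer with span access to raw memory):

```python
import threading

class simple_buffer:
    """Toy circular buffer: item counts for reader/writer, lock for thread safety."""
    def __init__(self, n_items):
        self._data = [None] * n_items
        self._lock = threading.Lock()
        self._wr = 0   # total items written
        self._rd = 0   # total items read

    def items_writable(self):
        with self._lock:
            return len(self._data) - (self._wr - self._rd)

    def items_readable(self):
        with self._lock:
            return self._wr - self._rd

    def write(self, items):
        with self._lock:
            for it in items:
                self._data[self._wr % len(self._data)] = it
                self._wr += 1

    def read(self, n):
        with self._lock:
            out = [self._data[(self._rd + i) % len(self._data)] for i in range(n)]
            self._rd += n
            return out

buf = simple_buffer(8)
buf.write([1, 2, 3])
# buf.items_readable() == 3; buf.read(2) == [1, 2]
```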

Packetized Streams vs. Asynchronous Message Ports

I believe these both need to exist. The GR 3 async message ports provide a simple mechanism for non-signal-processing tasks, such as setting parameter values, that should not be part of the work method.

For packetized streams, data with metadata would be placed in the input buffer of a port and processed in the work function:

template <>
add<pdu>::work() {
  // unpack the pdus on port1 and port2
  // create a new pdu for the output port
  // perform the add operation
}

template <typename T>
add<T>::work() {
  // use volk to add the data from port1 and port2 directly on the output port buffer
}

Runtime Block Construction (e.g. Python)

We need to retain the ability to define blocks entirely in python:

  • Ports need to be added at runtime
  • Buffers need to be accessible from work at runtime
class myblock(gr.block):
    def __init__(self, *args, **kwargs):
        # initialize the base block; ports can be added here at runtime
        ...

    def work(self, wio):
        # get np arrays from input ports
        # get mutable np arrays from output ports
        # produce and consume
        return gr.work_return_t.OK

Custom Buffers

Need to maintain the ability to have custom buffer classes, potentially defined and compiled out of tree, that are swappable at runtime.

Example of perceived usage

from gnuradio import myOOT
from gnuradio import gr, blocks

b1 = blocks.vector_source_f()
b2 = blocks.vector_sink_f()

fg = gr.flowgraph()
e = fg.connect([b1, b2])
# At this point, nothing is known about what buffers will actually be used

# Indicate to the runtime (flowgraph) that this edge should use the improved buffer,
# e.g. something like: e.set_custom_buffer(myOOT.improved_buffer_properties())
# If that line were commented out, the runtime would use the default buffer type
# (presumably the vmcircbuf flavor)

fg.initialize()  # this is where buffers are actually instantiated;
                 # fg.start() would call initialize() if not already initialized

YAML based block design workflow

As an aid to getting block developers to "insert code here", the YAML entry point provides a place to specify:

  • Ports
  • Parameters (i.e. constructor arguments with setters/getters)
  • Top level properties (e.g. block type)
  • Supported Types
  • ...
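A sketch of what such a YAML entry point might contain (the field names and schema below are illustrative assumptions, not the actual GR 4.0 format):

```yaml
module: module_x
block: block_a
typekeys:
  - id: T
    options: [rf32, rc32]   # supported types -> block_a_f, block_a_c, ...
parameters:
  - id: k
    dtype: T
    settable: true          # would generate set_k / get_k callbacks
ports:
  - name: in
    type: stream
    direction: input
    dtype: T
  - name: out
    type: stream
    direction: output
    dtype: T
implementations:
  - id: cpu
  - id: cuda                # separate domain implementation of the same block
```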

The functionality as it currently stands is to autogenerate much of the supporting code:

  • Python bindings
  • GRC Bindings
  • Convenience methods (set_, get_ callbacks)
  • Reflection methods
  • Parameter Objects and associated methods
  • RPC hooks