@MilesLitteral
Last active March 15, 2023 06:04
Futhark-Metal Notes and Readings
FM Dylib Components, Functions, Interfaces
Interfaces: MetalMath, MetalGraphics, MetalGeometry, MetalCreate
Main Actors:
GenerateDevice(),
PrepareData(A, B, Result);
Execute();
ExecuteOn(device);
Functions:
MetalMath (GPU Math Utils)
NS: Basic
Add(), Sub(), Mult(), Div()
Sqr(), Root(), Random()
NS: Map
Combine(), Concat(), Sum()
NS: Linear
Conjunction(), Disjunction(), Linear()
MetalGraphics (2D Graphics on GPU)
Textures(), ViewController(), Primitives2d(), Materials2d()
MetalGeometry (3D Graphics on GPU)
Primitives3d(), Materials3d(),
Shader(), OpenGL? (MGL), World()
MetalCreate (System Utilities)
MCreateFromString(source)
MCreateFromPath(path)
MCreateFromFut(fut)
MCreateFromHs(haskell)
MCreateFromForeign(str)
MCreateLib(scripts,device)
MCreateDyLib(Args)
Bonus: MoltenVK bridge
MVKCreateInstance
MVKBuffer
MVKPipeline
MVKExecuteOn
Futhark Metal :: Project
Step 0: Research the Metal Shading Language (MSL), FFI in Haskell, FutT, and Metal libraries; compile notes digitally
Step 1: create a basic metallib with mtl++ ✅
Step 2: create a basic metaldylib with mtl++ 🅾️
Step 3: create FutharkMetal.dylib 🟥
Create C++ Interface
ns::Error handlerGlobal
mtlpp::Device GenerateDevice()
void PrepareData([F32], [F32], Result)
void Execute(Device)
void ExecuteOn(Device, ErrorHandler)
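The interface above could be pinned down with a CPU-only stub before any Metal code exists. The following is a hypothetical sketch: the `Device` stand-in and the placeholder elementwise "kernel" are invented here to fix the API shape, not taken from the gist.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical CPU stand-in for mtlpp::Device, just to fix the API shape.
struct Device { bool ok = true; };

struct FutharkMetal {
    Device device;
    std::vector<float> a, b, result;

    // mtlpp::Device GenerateDevice()
    static Device GenerateDevice() { return Device{}; }

    // void PrepareData([F32], [F32], Result)
    void PrepareData(std::vector<float> A, std::vector<float> B) {
        a = std::move(A);
        b = std::move(B);
        result.assign(a.size(), 0.0f);
    }

    // void Execute(Device) -- elementwise multiply as a placeholder "kernel"
    void Execute() {
        for (std::size_t i = 0; i < a.size(); ++i)
            result[i] = a[i] * b[i];
    }
};
```

Swapping the stub bodies for real mtlpp calls later would leave any caller (Futhark's generated C, or Futhask) unchanged.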
OpenCL Experiment Reflection
Create a Metal reflection of Futhark-compiled OpenCL code
List what must be edited in the following codebases to add Metal support to Futhark
Futhark
Wrap.hs
Futhask
CodeGen.hs
Getting Started with tensorflow-metal PluggableDevice
Accelerate training of machine learning models with TensorFlow right on your Mac. Install TensorFlow and the tensorflow-metal PluggableDevice to accelerate training with Metal on Mac GPUs.
Learn more about TensorFlow PluggableDevices
OS Requirements
macOS 12.0+ (latest beta)
Currently Not Supported
Multi-GPU support
Acceleration for Intel GPUs
V1 TensorFlow Networks
Installation Instructions
Step 1: Environment setup
x86 : AMD
Create virtual environment (recommended):
python3 -m venv ~/tensorflow-metal
source ~/tensorflow-metal/bin/activate
python -m pip install -U pip
NOTE: python version 3.8 required
arm64 : Apple Silicon
Download and install Conda env:
chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate
Install the TensorFlow dependencies:
conda install -c apple tensorflow-deps
When upgrading to a new base TensorFlow version, we recommend:
# uninstall existing tensorflow-macos and tensorflow-metal
python -m pip uninstall tensorflow-macos
python -m pip uninstall tensorflow-metal
# Upgrade tensorflow-deps
conda install -c apple tensorflow-deps --force-reinstall
# or point to specific conda environment
conda install -c apple tensorflow-deps --force-reinstall -n my_env
tensorflow-deps versions follow the base TensorFlow versions, so:
For v2.5:
conda install -c apple tensorflow-deps==2.5.0
For v2.6:
conda install -c apple tensorflow-deps==2.6.0
NOTE: python versions 3.8 and 3.9 supported
Step 2: Install base TensorFlow
python -m pip install tensorflow-macos
NOTE: If using a conda environment built against a pre-macOS 11 SDK, use:
SYSTEM_VERSION_COMPAT=0 python -m pip install tensorflow-macos
otherwise you will get errors like: “not a supported wheel on this platform”
Step 3: Install tensorflow-metal plugin
python -m pip install tensorflow-metal
See Also:
https://developer.apple.com/metal/tensorflow-plugin/
SasoriZero Jan 28 00:50
Hello all! It's been a minute, but I have good news: I have made progress with adding Metal to Futhark! Nothing too concrete, as my work is now in the early intermediate stage, but the possibility is becoming more and more promising.
I am utilizing a toll-free bridging method which Futhark should be able to use easily. The Metal code I am working with is in C++ rather than Objective-C or Swift. Next I will do some experiments in code generation and get back to everybody (tonight). Any questions welcome.
Troels Henriksen Jan 28 00:06
@MilesLitteral What is Toll-Free-Bridging?
@gusten:matrix.org There is also FUTHARK_PROGRAM_ERROR that you can perhaps make use of: https://futhark.readthedocs.io/en/latest/c-api.html#error-codes
SasoriZero Jan 28 00:50
@athas When it comes to macOS, there are a number of data types in the Core Foundation
framework and the Foundation framework that can be used interchangeably. This capability, called toll-free bridging, means that you can use the same data type as the parameter to a Core Foundation function call (C/C++) or as the receiver of an Objective-C message.
Additionally, Objective-C and C++ use the same compiler and can therefore reference each other. This can be further augmented to have the C++ talk to Swift, through which a Metal library can be built, via C++, that Futhark could then communicate with to perform GPU calculations on Macs and M1 GPUs/APUs. The entries.cpp will end up being much
smaller than entries.c.
Troels Henriksen Jan 28 00:52
What is entries.cpp/entries.c in this case?
SasoriZero Jan 28 00:55
entries.cpp would be an interface full of functions that reference a dynamic library,
which Futhark would use to do its business on an M1. By virtue of this dynamic library
being what it is, Futhark can more or less use it as its main .metal, metallib, and metaldylib generation interface without having to write, say, a code generator from scratch. The dylib would define extremely basic operations that Futhark can manipulate to create a CommandBuffer which the M1 would then execute on the GPU, similar to how CUDA functions. I made a shell script for the moment that automates compiling Metal, but it's a little too hacky in my opinion to formalize into the codebase; Futhark
or Haskell should just be able to call an interface when necessary to compile on
M1.
Troels Henriksen Jan 28 00:59
Great, that sounds excellent. That takes care of the host side of things,
and if the presented API is reasonably similar to either OpenCL or NVRTC,
then it will be very easy to generate code for it. Then the remaining challenge becomes generating the device code.
SasoriZero Jan 28 01:03
It very much is, and it even streamlines things like memory and error handling nicely.
I was most worried about generating .metal, but if it comes from templates this will
not only be very safe but also very flexible. What do you mean when you say ‘generating device code’? It's been of interest, but I'm curious as to your meaning.
Troels Henriksen Jan 28 01:05
Sorry, I mean generating the shaders/kernels or whatever Metal calls it.
The code that actually runs on the GPU. Does Metal use SPIR-V?
SasoriZero Jan 28 01:07
I don’t believe so, as there are some issues I’ve seen on GitHub from people who try to use a Vulkan-compatible
SPIR-V but cannot, because it's not necessarily Metal-compliant. There are workarounds to this, somewhat, like using the Metal API rather than Metal.h, as Vulkan can and some libraries do to
support Macs. But using the Metal API is a workaround at best, while using a metallib or metaldylib is a more native solution.
Troels Henriksen Jan 28 01:05
That's fine; none of the existing GPU backends use SPIR-V. So how do you specify the code that runs on the GPU?
Oh, MSL: https://developer.apple.com/documentation/metal/basic_tasks_and_concepts/performing_calculations_on_a_gpu
Looks a lot like OpenCL, so that's great.
SasoriZero Jan 28 01:17
Metal/Metal Shading Language/Metal 2, yeah. Heh, this is very simplified, but it
goes something like this: you create a kernel (a Metal keyword), and you can include any header you want with it, etc.; it
just must include metal_stdlib, which is a reserved phrase. Save it as a .metal file. You write a C++ program that creates three objects:
a Device,
a CommandBuffer,
and a CommandQueue
that are used to prepare information for execution on the GPU. Next you make a NewLibrary declaration and write NewFunction declarations which contain strings of the Metal kernels.
Lastly, you write a pipeline function which tells the GPU you are giving it a library and that this library has functions which will be executed on the GPU. Then it's as simple as a call
in main, or wherever the returned data is needed. This is why I am writing a
metaldylib at the moment: this process can be simplified down to just
generation and execution with minimal boilerplate, like GenerateDevice(), prepareData(arrays), compute(arrays).
Troels Henriksen Jan 28 01:20
Is writing a manual parser off the table? Futhark is relatively simple, I would think,
and my experience with parser generators is that they don't work that well for anything
more than a really simple format.
There's MoltenVK, which lets you run Vulkan (including SPIR-V shaders) on Metal, but there are probably standalone SPIR-V-to-MSL compilers also. The current parser works well.
The reason I investigated Megaparsec was to get better errors, and to easily compose it within other parsers (e.g. for Literate Futhark). None of those advantages would apply to a hand-written parser.
SasoriZero 00:02
I will keep this in mind, as the point of the dylib is to simplify conversion back and forth from Metal to Futhark, or Metal to Haskell. You said “Is writing a manual parser off the table?” It's not, though I'm curious how you'd go about it, or what your preference is.
Troels Henriksen 00:55
I'm not sure what you're asking.
SasoriZero 08:26
I guess I'm asking: when it comes to Futhark, does it have a “style” to how it compiles kernels, or a “method” that's clearly established? I wouldn't want to write code that violates the established paradigm; it should instead augment it. Metal scripts could just be strings, generated strings, in Futhark, and no .metal files would need to be truly handled.
Troels Henriksen 08:48
I'm not sure what your last sentence means.
For the OpenCL and CUDA backends, the Futhark compiler generates a single OpenCL/CUDA device program that is compiled with the appropriate API function,
e.g. this one: https://www.khronos.org/registry/OpenCL/sdk/2.2/docs/man/html/clCreateProgramWithSource.html
That program usually exposes many different kernels. But this part is pretty flexible and easy to change to fit backend idiosyncrasies.
SasoriZero 08:50
Ahhh, okay 👍🏽 thanks! I will keep that in mind as I code; Metal has a function equivalent to clCreateProgramWithSource.
Troels Henriksen 08:51
You might want to compile a simple Futhark program with the OpenCL backend and see what it looks like.
SasoriZero 09:08
Will do
This is the next immediate thing I'm doing while making a dylib in general for Futhark.
//For Futhark
//resolver: lts-14.4
#Objectives
Implement the dot-product experiment in Futhark; execute dotprod with OpenCL;
study and reflect the OpenCL style in Metal, in the interest of the Futhark
compiler easily emitting Metal in the same style as it emits OpenCL and CUDA code/kernels.
Principal Functions:
OpenCL
cl_program clCreateProgramWithSource(
cl_context context,
cl_uint count,
const char** strings,
const size_t* lengths,
cl_int* errcode_ret);
Metal
//By default, a metallib is made at compilation time; however, this is an alternative test that runs const char strings as Metal scripts
//Possibly more important for Futhark
void generateMetalLib(const char *src, mtlpp::Device device){
    //TODO: replace the hard-coded source below with src
    //Adjacent string literals concatenate; the stray semicolons that used to
    //separate them would have truncated the source after the first line.
    const char shadersSrc[] =
        "#include <metal_stdlib>\n"
        "using namespace metal;\n"
        "kernel void sqr(\n"
        "    const device float *vIn [[ buffer(0) ]],\n"
        "    device float *vOut [[ buffer(1) ]],\n"
        "    uint id [[ thread_position_in_grid ]])\n"
        "{\n"
        "    vOut[id] = vIn[id] * vIn[id];\n"
        "}\n";
    ns::Error* error = NULL; //nullptr
    mtlpp::Library library = device.NewLibrary(shadersSrc, mtlpp::CompileOptions(), error);
    assert(library);
    mtlpp::Function sqrFunc = library.NewFunction("sqr");
    assert(sqrFunc);
    mtlpp::ComputePipelineState computePipelineState = device.NewComputePipelineState(sqrFunc, error);
    assert(computePipelineState);
    mtlpp::CommandQueue commandQueue = device.NewCommandQueue();
    assert(commandQueue);
}
See Also: https://developer.apple.com/documentation/metal/mtldevice/1433431-makelibrary
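One host-side C++ pitfall worth flagging when embedding MSL source this way: adjacent string literals concatenate into a single string only when nothing (such as a stray semicolon) separates them, and each line should carry an explicit \n so the Metal compiler's diagnostics line up with the source. A minimal, self-contained illustration (no mtlpp needed):

```cpp
#include <cassert>
#include <string>

// MSL kernel source built from adjacent C++ string literals; a semicolon
// between any two of them would silently truncate the source after the
// first literal.
const char kSqrKernelSrc[] =
    "#include <metal_stdlib>\n"
    "using namespace metal;\n"
    "kernel void sqr(const device float *vIn [[ buffer(0) ]],\n"
    "                device float *vOut      [[ buffer(1) ]],\n"
    "                uint id [[ thread_position_in_grid ]])\n"
    "{\n"
    "    vOut[id] = vIn[id] * vIn[id];\n"
    "}\n";
```

This string is exactly what would be handed to device.NewLibrary(...) above.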
Code:
Futhark
let main (x: []i32) (y: []i32): i32 = reduce (+) 0 (map2 (*) x y)
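For reference, the Futhark program computes map2 (*) followed by reduce (+) 0. A plain C++ rendering of the same semantics (useful as a CPU oracle when validating the Metal port; the function name `dotprod` is just chosen here to match the file name) might look like:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// CPU oracle for: let main (x: []i32) (y: []i32): i32 = reduce (+) 0 (map2 (*) x y)
int32_t dotprod(const std::vector<int32_t>& x, const std::vector<int32_t>& y) {
    int32_t acc = 0;                 // reduce (+) 0
    for (std::size_t i = 0; i < x.size(); ++i)
        acc += x[i] * y[i];          // map2 (*)
    return acc;
}
```

With the sample input used in the shell section later, `echo [2,2,3] [4,5,6] | ./dotprod` should print 36, matching this oracle.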
Metal
#include <metal_stdlib>
using namespace metal;
//function to calculate the per-element partial products of two vectors;
//the host sums the partials to obtain the dot product
kernel void mtlDotProduct(device const float* inA, device const float* inB, device float* product, uint index [[thread_position_in_grid]])
{
// the for-loop is replaced with a collection of threads, each of which
// calls this function; writing product[index] avoids a data race on a
// single shared accumulator.
product[index] = inA[index] * inB[index];
}
kernel void mtlCrossProduct(device const float* vector_a, device const float* vector_b, device float* temp) {
temp[0] = vector_a[1] * vector_b[2] - vector_a[2] * vector_b[1];
temp[1] = -(vector_a[0] * vector_b[2] - vector_a[2] * vector_b[0]);
temp[2] = vector_a[0] * vector_b[1] - vector_a[1] * vector_b[0];
}
C++
#include <mtlpp.hpp>
//#include <bits/stdc++.h>
#define size 3
const unsigned int arrayLength = 10; //1 << 24;
const unsigned int bufferSize = arrayLength * sizeof(float);
using namespace std;
//function to calculate the dot product of two vectors
float mtlDotProduct(const float* vector_a, const float* vector_b, int length) {
    float product = 0.0f;
    for (int i = 0; i < length; i++){
        product = product + vector_a[i] * vector_b[i];
    }
    return product;
}
void mtlCrossProduct(const float* vector_a, const float* vector_b, float* temp) {
temp[0] = vector_a[1] * vector_b[2] - vector_a[2] * vector_b[1];
temp[1] = -(vector_a[0] * vector_b[2] - vector_a[2] * vector_b[0]);
temp[2] = vector_a[0] * vector_b[1] - vector_a[1] * vector_b[0];
}
class MetalDotProduct
{
public:
mtlpp::Device _mDevice;
// The compute pipeline generated from the compute kernel in the .metal shader file.
mtlpp::ComputePipelineState _mDotFunctionPSO;
mtlpp::ComputePipelineState _mDotCrossFunctionPSO;
// The command queue used to pass commands to the device.
mtlpp::CommandQueue _mCommandQueue;
// Buffers to hold data.
mtlpp::Buffer _mBufferA;
mtlpp::Buffer _mBufferB;
mtlpp::Buffer _mBufferResult;
MetalDotProduct(mtlpp::Device device)
{
_mDevice = device;
ns::Error* error = NULL;
// Load the compiled library from disk (could also use device.NewDefaultLibrary()).
mtlpp::Library defaultLibrary = device.NewLibrary("/Users/sasori/Desktop/mtl++/mtl++/mtl++/add.metallib", error);
if (defaultLibrary.GetFunctionNames() == NULL)
{
printf("Failed to find the default library.\n");
}
mtlpp::Function dotFunction = defaultLibrary.NewFunction("mtlDotProduct");
mtlpp::Function crossFunction = defaultLibrary.NewFunction("mtlCrossProduct");
// Create the compute pipeline state objects.
_mDotFunctionPSO = device.NewComputePipelineState(dotFunction, error);
_mDotCrossFunctionPSO = device.NewComputePipelineState(crossFunction, error);
_mCommandQueue = device.NewCommandQueue();
}
void generateRandomFloatData(mtlpp::Buffer buffer)
{
float* dataPtr = (float*)buffer.GetContents();
for (unsigned long index = 0; index < arrayLength; index++)
{
dataPtr[index] = (float)rand()/(float)(RAND_MAX);
}
}
void generateFloatData(mtlpp::Buffer buffer, float vector[])
{
//e.g. { 4, 2, -1 } and { 5, 7, 1 }
float* dataPtr = (float*)buffer.GetContents();
for (unsigned long index = 0; index < size; index++)
{
dataPtr[index] = vector[index];
}
}
void sendComputeCommand(mtlpp::CommandQueue commandQueue)
{
// Create a command buffer to hold commands.
mtlpp::CommandBuffer commandBuffer = commandQueue.CommandBuffer();
// Start a compute pass.
mtlpp::ComputeCommandEncoder computeEncoder = commandBuffer.ComputeCommandEncoder();
encodeCommand(computeEncoder, _mDotFunctionPSO);
// End the compute pass.
computeEncoder.EndEncoding();
// Execute the command.
commandBuffer.Commit();
// Normally, you want to do other work in your app while the GPU is running,
// but in this example, the code simply blocks until the calculation is complete.
commandBuffer.WaitUntilCompleted();
verifyResults();
}
void encodeCommand(mtlpp::ComputeCommandEncoder computeEncoder, mtlpp::ComputePipelineState state) {
// Encode the pipeline state object and its parameters.
computeEncoder.SetComputePipelineState(state);
computeEncoder.SetBuffer(_mBufferA, 0, 0);
computeEncoder.SetBuffer(_mBufferB, 0, 1);
computeEncoder.SetBuffer(_mBufferResult, 0, 2);
//_mBufferResult = new mtlpp::Buffer(x, y);
mtlpp::Size gridSize = mtlpp::Size(arrayLength, 1, 1);
// Calculate a threadgroup size.
uint32_t threadGroupSize = state.GetMaxTotalThreadsPerThreadgroup();
if (threadGroupSize > arrayLength)
{
threadGroupSize = arrayLength;
}
mtlpp::Size threadgroupSize = mtlpp::Size(threadGroupSize, 1, 1);
// Encode the compute command.
computeEncoder.DispatchThreadgroups(gridSize, threadgroupSize);
}
void prepareData(mtlpp::Device device, float* vectorA, float* vectorB)
{
// Allocate three buffers to hold our initial data and the result.
_mBufferA = device.NewBuffer(bufferSize, mtlpp::ResourceOptions::StorageModeShared);
_mBufferB = device.NewBuffer(bufferSize, mtlpp::ResourceOptions::StorageModeShared);
_mBufferResult = device.NewBuffer(bufferSize, mtlpp::ResourceOptions::StorageModeShared);
generateFloatData(_mBufferA, vectorA);
generateFloatData(_mBufferB, vectorB);
}
void verifyResults()
{
float* a = (float*)_mBufferA.GetContents();
float* b = (float*)_mBufferB.GetContents();
float* result = (float*)_mBufferResult.GetContents();
// The kernel writes per-element partial products, so compare against a*b;
// only the first `size` elements are populated by generateFloatData.
for (unsigned long index = 0; index < size; index++)
{
if (result[index] != (a[index] * b[index]))
{
printf("Compute ERROR: index=%lu result=%g vs %g=a*b\n",
index, result[index], a[index] * b[index]);
}
else{
printf("Compute MATCH: index=%lu result=%g vs %g=a*b\n",
index, result[index], a[index] * b[index]);
}
}
printf("Compute verification finished\n");
}
};
int main() {
//MTLPP
mtlpp::Device device = mtlpp::Device::CreateSystemDefaultDevice();
MetalDotProduct adder(device);
// Create buffers to hold data (the sample vectors from generateFloatData).
float vectorA[size] = { 4, 2, -1 };
float vectorB[size] = { 5, 7, 1 };
adder.prepareData(device, vectorA, vectorB);
// Send a command to the GPU to perform the calculation.
adder.sendComputeCommand(adder._mCommandQueue);
return 0;
}
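The grid/threadgroup sizing in encodeCommand is plain integer logic and can be sanity-checked off-device; a standalone sketch of that clamp:

```cpp
#include <cassert>
#include <cstdint>

// Mirrors the sizing logic in encodeCommand: never ask for more threads
// per threadgroup than there are elements, nor more than the pipeline
// state reports via GetMaxTotalThreadsPerThreadgroup().
uint32_t pickThreadGroupSize(uint32_t maxPerThreadgroup, uint32_t arrayLength) {
    uint32_t threadGroupSize = maxPerThreadgroup;
    if (threadGroupSize > arrayLength)
        threadGroupSize = arrayLength;
    return threadGroupSize;
}
```

For arrayLength = 10 and a typical maximum of 1024, this clamps to 10; for tiny maxima it passes the hardware limit through unchanged.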
Shell
#futhark repl    # optional for debugging; Futhark's IRB/IPY
#Futhark build
futhark c dotprod.fut
futhark opencl dotprod.fut
echo [2,2,3] [4,5,6] | ./dotprod
#Metal build
#futhark metal dotprod.fut    # TBA
sh metalBuild.sh
echo [2,2,3] [4,5,6] | ./dotprod
Toll-Free Bridging Fails on Linux (To Be Expected)
manifold7@manifold8:~/Desktop/GitHub/futhark$ stack build
Building all executables for `futhark-metal' once. After a successful build of all of them, only specified executables will be rebuilt.
futhark-metal> build (lib + exe)
Preprocessing library for futhark-metal-0.21.0..
Building library for futhark-metal-0.21.0..
gcc: error: .stack-work/dist/x86_64-linux-tinfo6/Cabal-3.4.1.0/build/Futhark/Pass/ExtractKernels/Interchange.dyn_o: No such file or directory
`gcc' failed in phase `Linker'. (Exit code: 1)
-- While building package futhark-metal-0.21.0 (scroll up to its section to see the error) using:
/home/manifold7/.stack/setup-exe-cache/x86_64-linux-tinfo6/Cabal-simple_mPHDZzAJ_3.4.1.0_ghc-9.0.2 --builddir=.stack-work/dist/x86_64-linux-tinfo6/Cabal-3.4.1.0 build lib:futhark-metal exe:futhark-metal --ghc-options " -fdiagnostics-color=always"
Process exited with code: ExitFailure 1
Test Futhark-Metal on Windows
It does build on Mac successfully
#include <metal_stdlib>
using namespace metal;
typedef struct
{
float time;
} Uniforms;
constexpr sampler textureSampler (mag_filter::linear, min_filter::linear);
kernel void shader(texture2d<float, access::read> texture0 [[texture(0)]],
texture2d<float, access::read> texture1 [[texture(1)]],
texture2d<float, access::read> texture2 [[texture(2)]],
texture2d<float, access::read> texture3 [[texture(3)]],
texture2d<float, access::write> output [[texture(4)]],
constant Uniforms& uniforms [[buffer(0)]],
uint2 gid [[thread_position_in_grid]])
{
int width = output.get_width();
int height = output.get_height();
float2 uv = float2(gid) / float2(width, height);
float4 result = float4(uv, 0.5 + 0.5 * sin(uniforms.time), 1.0);
output.write(result, gid);
}
Toll-Free Bridged Types
There are a number of data types in the Core Foundation framework and the Foundation framework that can be used interchangeably. Data types that can be used interchangeably are also referred to as toll-free bridged data types. This means that you can use the same data structure as the argument to a Core Foundation function call or as the receiver of an Objective-C message invocation. For example, NSLocale (see NSLocale Class Reference) is interchangeable with its Core Foundation counterpart, CFLocale (see CFLocale Reference).
Not all data types are toll-free bridged, even though their names might suggest that they are. For example, NSRunLoop is not toll-free bridged to CFRunLoop, NSBundle is not toll-free bridged to CFBundle, and NSDateFormatter is not toll-free bridged to CFDateFormatter. Table 1 provides a list of the data types that support toll-free bridging.
Note: If you install a custom callback on a Core Foundation collection you are using, including a NULL callback, its memory management behavior is undefined when accessed from Objective-C.
Casting and Object Lifetime Semantics
Through toll-free bridging, in a method where you see for example an NSLocale * parameter, you can pass a CFLocaleRef, and in a function where you see a CFLocaleRef parameter, you can pass an NSLocale instance. You also have to provide other information for the compiler: first, you have to cast one type to the other; in addition, you may have to indicate the object lifetime semantics.
The compiler understands Objective-C methods that return Core Foundation types and follow the historical Cocoa naming conventions (see Advanced Memory Management Programming Guide). For example, the compiler knows that, in iOS, the CGColor returned by the CGColor method of UIColor is not owned. You must still use an appropriate type cast, as illustrated by this example:
NSMutableArray *colors = [NSMutableArray arrayWithObject:(id)[[UIColor darkGrayColor] CGColor]];
[colors addObject:(id)[[UIColor lightGrayColor] CGColor]];
The compiler does not automatically manage the lifetimes of Core Foundation objects. You tell the compiler about the ownership semantics of objects using either a cast (defined in objc/runtime.h) or a Core Foundation-style macro (defined in NSObject.h):
__bridge transfers a pointer between Objective-C and Core Foundation with no transfer of ownership.
__bridge_retained or CFBridgingRetain casts an Objective-C pointer to a Core Foundation pointer and also transfers ownership to you.
You are responsible for calling CFRelease or a related function to relinquish ownership of the object.
__bridge_transfer or CFBridgingRelease moves a non-Objective-C pointer to Objective-C and also transfers ownership to ARC.
ARC is responsible for relinquishing ownership of the object.
Some of these are shown in the following example:
NSLocale *gbNSLocale = [[NSLocale alloc] initWithLocaleIdentifier:@"en_GB"];
CFLocaleRef gbCFLocale = (__bridge CFLocaleRef)gbNSLocale;
CFStringRef cfIdentifier = CFLocaleGetIdentifier(gbCFLocale);
NSLog(@"cfIdentifier: %@", (__bridge NSString *)cfIdentifier);
// Logs: "cfIdentifier: en_GB"
CFLocaleRef myCFLocale = CFLocaleCopyCurrent();
NSLocale *myNSLocale = (NSLocale *)CFBridgingRelease(myCFLocale);
NSString *nsIdentifier = [myNSLocale localeIdentifier];
CFShow((CFStringRef)[@"nsIdentifier: " stringByAppendingString:nsIdentifier]);
// Logs identifier for current locale
The next example shows the use of Core Foundation memory management functions where dictated by the Core Foundation memory management rules:
- (void)drawRect:(CGRect)rect {
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGFloat locations[2] = {0.0, 1.0};
NSMutableArray *colors = [NSMutableArray arrayWithObject:(id)[[UIColor darkGrayColor] CGColor]];
[colors addObject:(id)[[UIColor lightGrayColor] CGColor]];
CGGradientRef gradient = CGGradientCreateWithColors(colorSpace, (__bridge CFArrayRef)colors, locations);
CGColorSpaceRelease(colorSpace); // Release owned Core Foundation object.
CGPoint startPoint = CGPointMake(0.0, 0.0);
CGPoint endPoint = CGPointMake(CGRectGetMaxX(self.bounds), CGRectGetMaxY(self.bounds));
CGContextDrawLinearGradient(ctx, gradient, startPoint, endPoint,
kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation);
CGGradientRelease(gradient); // Release owned Core Foundation object.
}
Toll-Free Bridged Types
Table 1 provides a list of the data types that are interchangeable between Core Foundation and Foundation. For each pair, the table also lists the version of OS X in which toll-free bridging between them became available.
Table 1 Data types that can be used interchangeably between Core Foundation and Foundation
Core Foundation type | Foundation class | Availability
CFArrayRef | NSArray | OS X 10.0
CFAttributedStringRef | NSAttributedString | OS X 10.4
CFBooleanRef | NSNumber | OS X 10.0
CFCalendarRef | NSCalendar | OS X 10.4
CFCharacterSetRef | NSCharacterSet | OS X 10.0
CFDataRef | NSData | OS X 10.0
CFDateRef | NSDate | OS X 10.0
CFDictionaryRef | NSDictionary | OS X 10.0
CFErrorRef | NSError | OS X 10.5
CFLocaleRef | NSLocale | OS X 10.4
CFMutableArrayRef | NSMutableArray | OS X 10.0
CFMutableAttributedStringRef | NSMutableAttributedString | OS X 10.4
CFMutableCharacterSetRef | NSMutableCharacterSet | OS X 10.0
CFMutableDataRef | NSMutableData | OS X 10.0
CFMutableDictionaryRef | NSMutableDictionary | OS X 10.0
CFMutableSetRef | NSMutableSet | OS X 10.0
CFMutableStringRef | NSMutableString | OS X 10.0
CFNullRef | NSNull | OS X 10.2
CFNumberRef | NSNumber | OS X 10.0
CFReadStreamRef | NSInputStream | OS X 10.0
CFRunLoopTimerRef | NSTimer | OS X 10.0
CFSetRef | NSSet | OS X 10.0
CFStringRef | NSString | OS X 10.0
CFTimeZoneRef | NSTimeZone | OS X 10.0
CFURLRef | NSURL | OS X 10.0
CFWriteStreamRef | NSOutputStream | OS X 10.0
Using C++ With Objective-C
Apple’s Objective-C compiler allows you to freely mix C++ and Objective-C code in the same source file. This Objective-C/C++ language hybrid is called Objective-C++. With it you can make use of existing C++ libraries from your Objective-C applications.
Mixing Objective-C and C++ Language Features
In Objective-C++, you can call methods from either language in C++ code and in Objective-C methods. Pointers to objects in either language are just pointers, and as such can be used anywhere. For example, you can include pointers to Objective-C objects as data members of C++ classes, and you can include pointers to C++ objects as instance variables of Objective-C classes. Listing 14-1 illustrates this.
Note: Xcode requires that file names have a “.mm” extension for the Objective-C++ extensions to be enabled by the compiler.
Listing 14-1 Using C++ and Objective-C instances as instance variables
/* Hello.mm
* Compile with: g++ -x objective-c++ -framework Foundation Hello.mm -o hello
*/
#import <Foundation/Foundation.h>
class Hello {
private:
id greeting_text; // holds an NSString
public:
Hello() {
greeting_text = @"Hello, world!";
}
Hello(const char* initial_greeting_text) {
greeting_text = [[NSString alloc] initWithUTF8String:initial_greeting_text];
}
void say_hello() {
printf("%s\n", [greeting_text UTF8String]);
}
};
@interface Greeting : NSObject {
@private
Hello *hello;
}
- (id)init;
- (void)dealloc;
- (void)sayGreeting;
- (void)sayGreeting:(Hello*)greeting;
@end
@implementation Greeting
- (id)init {
self = [super init];
if (self) {
hello = new Hello();
}
return self;
}
- (void)dealloc {
delete hello;
[super dealloc];
}
- (void)sayGreeting {
hello->say_hello();
}
- (void)sayGreeting:(Hello*)greeting {
greeting->say_hello();
}
@end
int main() {
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
Greeting *greeting = [[Greeting alloc] init];
[greeting sayGreeting]; // > Hello, world!
Hello *hello = new Hello("Bonjour, monde!");
[greeting sayGreeting:hello]; // > Bonjour, monde!
delete hello;
[greeting release];
[pool release];
return 0;
}
As you can declare C structs in Objective-C interfaces, you can also declare C++ classes in Objective-C interfaces. As with C structs, C++ classes defined within an Objective-C interface are globally-scoped, not nested within the Objective-C class. (This is consistent with the way in which standard C—though not C++—promotes nested struct definitions to file scope.)
To allow you to conditionalize your code based on the language variant, the Objective-C++ compiler defines both the __cplusplus and the __OBJC__ preprocessor constants, as specified by (respectively) the C++ and Objective-C language standards.
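A minimal check of those predefined constants (compiled here as plain C++, so only __cplusplus is defined; under Objective-C++, in a .mm file, both branches would apply):

```cpp
#include <cassert>

// Reports which language variant the translation unit was compiled as,
// based on the predefined __cplusplus and __OBJC__ constants.
const char* languageVariant() {
#if defined(__OBJC__) && defined(__cplusplus)
    return "Objective-C++";
#elif defined(__cplusplus)
    return "C++";
#else
    return "C or Objective-C";
#endif
}
```

This is the standard way to keep a single header usable from C, C++, Objective-C, and Objective-C++ clients.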
As previously noted, Objective-C++ does not allow you to inherit C++ classes from Objective-C objects, nor does it allow you to inherit Objective-C classes from C++ objects.
class Base { /* ... */ };
@interface ObjCClass: Base ... @end // ERROR!
class Derived: public ObjCClass ... // ERROR!
Unlike Objective-C, objects in C++ are statically typed, with runtime polymorphism available as an exceptional case. The object models of the two languages are thus not directly compatible. More fundamentally, the layout of Objective-C and C++ objects in memory is mutually incompatible, meaning that it is generally impossible to create an object instance that would be valid from the perspective of both languages. Hence, the two type hierarchies cannot be intermixed.
You can declare a C++ class within an Objective-C class declaration. The compiler treats such classes as having been declared in the global namespace, as follows:
@interface Foo {
class Bar { ... } // OK
}
@end
Bar *barPtr; // OK
Objective-C allows C structures (whether declared inside of an Objective-C declaration or not) to be used as instance variables.
@interface Foo {
struct CStruct { ... };
struct CStruct bigIvar; // OK
} ... @end
On Mac OS X 10.4 and later, if you set the -fobjc-call-cxx-cdtors compiler flag, you can use instances of C++ classes containing virtual functions and nontrivial user-defined zero-argument constructors and destructors as instance variables. (The -fobjc-call-cxx-cdtors compiler flag is set by default in gcc-4.2.) Constructors are invoked in the alloc method (specifically, inside class_createInstance), in declaration order immediately after the Objective-C object of which they are a member is allocated. The constructor used is the “public no-argument in-place constructor.” Destructors are invoked in the dealloc method (specifically, inside object_dispose), in reverse declaration order immediately before the Objective-C object of which they are a member is deallocated.
Mac OS X v10.3 and earlier: The following cautions apply only to Mac OS X v10.3 and earlier.
Objective-C++ similarly strives to allow C++ class instances to serve as instance variables. This is possible as long as the C++ class in question (along with all of its superclasses) does not have any virtual member functions defined. If any virtual member functions are present, the C++ class may not serve as an Objective-C instance variable.
#import <Cocoa/Cocoa.h>
struct Class0 { void foo(); };
struct Class1 { virtual void foo(); };
struct Class2 { Class2(int i, int j); };
@interface Foo : NSObject {
Class0 class0; // OK
Class1 class1; // ERROR!
Class1 *ptr; // OK—call 'ptr = new Class1()' from Foo's init,
// 'delete ptr' from Foo's dealloc
Class2 class2; // WARNING - constructor not called!
...
@end
C++ requires each instance of a class containing virtual functions to contain a suitable virtual function table pointer. However, the Objective-C runtime cannot initialize the virtual function table pointer, because it is not familiar with the C++ object model. Similarly, the Objective-C runtime cannot dispatch calls to C++ constructors or destructors for those objects. If a C++ class has any user-defined constructors or destructors, they are not called. The compiler emits a warning in such cases.
Objective-C does not have a notion of nested namespaces. You cannot declare Objective-C classes within C++ namespaces, nor can you declare namespaces within Objective-C classes.
Objective-C classes, protocols, and categories cannot be declared inside a C++ template, nor can a C++ template be declared inside the scope of an Objective-C interface, protocol, or category.
However, Objective-C classes may serve as C++ template parameters. C++ template parameters can also be used as receivers or parameters (though not as selectors) in Objective-C message expressions.
C++ Lexical Ambiguities and Conflicts
There are a few identifiers that are defined in the Objective-C header files that every Objective-C program must include. These identifiers are id, Class, SEL, IMP, and BOOL.
Inside an Objective-C method, the compiler pre-declares the identifiers self and super, similarly to the keyword this in C++. However, unlike the C++ this keyword, self and super are context-sensitive; they may be used as ordinary identifiers outside of Objective-C methods.
In the parameter list of methods within a protocol, there are five more context-sensitive keywords (oneway, in, out, inout, and bycopy). These are not keywords in any other contexts.
From an Objective-C programmer's point of view, C++ adds quite a few new keywords. You can still use C++ keywords as a part of an Objective-C selector, so the impact isn’t too severe, but you cannot use them for naming Objective-C classes or instance variables. For example, even though class is a C++ keyword, you can still use the NSObject method class:
[foo class]; // OK
However, because it is a keyword, you cannot use class as the name of a variable:
NSObject *class; // Error
In Objective-C, the names for classes and categories live in separate namespaces. That is, both @interface foo and @interface(foo) can exist in the same source code. In Objective-C++, you can also have a category whose name matches that of a C++ class or structure.
Protocol and template specifiers use the same syntax for different purposes:
id<someProtocolName> foo;
TemplateType<SomeTypeName> bar;
To avoid this ambiguity, the compiler doesn’t permit id to be used as a template name.
Finally, there is a lexical ambiguity in C++ when a label is followed by an expression that mentions a global name, as in:
label: ::global_name = 3;
The space after the first colon is required. Objective-C++ adds a similar case, which also requires a space:
receiver selector: ::global_c++_name;
Limitations
Objective-C++ does not add C++ features to Objective-C classes, nor does it add Objective-C features to C++ classes. For example, you cannot use Objective-C syntax to call a C++ object, you cannot add constructors or destructors to an Objective-C object, and you cannot use the keywords this and self interchangeably. The class hierarchies are separate; a C++ class cannot inherit from an Objective-C class, and an Objective-C class cannot inherit from a C++ class. In addition, multi-language exception handling is not supported. That is, an exception thrown in Objective-C code cannot be caught in C++ code and, conversely, an exception thrown in C++ code cannot be caught in Objective-C code. For more information on exceptions in Objective-C, see “Exception Handling.”