Consider the following program:
@safe:
char[] formatInPlace(T)(char[] buf, T value);
void main()
{
char[1024] buffer;
formatInPlace(buffer[], 28.625);
}
Compiling it currently yields no errors:
$ dmd --version | head -n1
DMD64 D Compiler v2.076.0-b1-master-32bb4ed
$ dmd -o- scope1.d
$ echo $?
0
If you were to throw the -dip1000 switch, you would get:
$ dmd -o- -dip1000 scope1.d
scope1.d(8): Error: reference to local variable buffer assigned to non-scope parameter buf calling scope1.formatInPlace!double.formatInPlace
which is expected, since the compiler can't see the body of the function and conservatively assumes that it will escape the slice.
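To see why that conservatism is warranted, consider one possible body for formatInPlace. This is a hypothetical sketch (the sneaky global is mine, not from the original); nothing in the unannotated signature rules it out:

```d
@safe:
char[] sneaky; // module-level storage the compiler must assume could exist

char[] formatInPlace(char[] buf, double value)
{
    sneaky = buf; // the slice of the caller's stack buffer now outlives the call
    return buf;
}
```

Since only the declaration is visible at the call site, the compiler has to assume the worst and reject passing a stack slice to a non-scope parameter.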
The only change needed to make the code compile is to annotate the buf parameter with scope:
module scope2;
@safe:
// adding `scope` here: v
char[] formatInPlace(T)(scope char[] buf, T value);
void main()
{
char[1024] buffer;
formatInPlace(buffer[], 28.625);
}
$ dmd -o- -dip1000 scope2.d
$ echo $?
0
Of course, that would not be entirely safe: formatInPlace presumably returns a slice of the buffer it originally received (which the compiler has no way of knowing), and that slice could easily be used to corrupt the stack.
module scope3;
@safe:
char[] formatInPlace(scope char[] buf, double value);
void main()
{
char[1024] buffer;
auto res = formatInPlace(buffer[], 28.625);
global = res; // dangerous - should be disallowed!
}
char[] global;
$ dmd -o- -dip1000 scope3.d
$ echo $?
0
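What "corrupt the stack" means here: once the frame that owned the buffer is reused, the escaped slice views dead or overwritten memory. A hypothetical sketch, with passThrough standing in for formatInPlace (this compiles without -dip1000):

```d
@safe:
char[] global;

char[] passThrough(scope char[] buf) { return buf; } // accepted without -dip1000

void fill()
{
    char[16] buffer = 'A';
    global = passThrough(buffer[]); // escapes a slice of fill's stack frame
}

void clobber()
{
    char[16] other = 'B'; // likely reuses the same stack region
}

void main()
{
    fill();
    clobber();
    // global now views stack memory that clobber() may have overwritten
}
```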
This is countered by indicating in the function signature that the parameter or part of it may be returned:
module scope4;
@safe:
// adding `return` here: v
char[] formatInPlace(return scope char[] buf, double value);
void main()
{
char[1024] buffer;
auto res = formatInPlace(buffer[], 28.625);
global = res; // this is now disallowed
}
char[] global;
$ dmd -o- -dip1000 scope4.d
scope4.d(11): Error: scope variable res assigned to non-scope global
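In other words, return scope lets the slice flow back to the caller while still tying its lifetime to the argument's. A minimal sketch with a hypothetical head function:

```d
@safe:

char[] head(return scope char[] buf, size_t n)
{
    return buf[0 .. n]; // OK: the result's lifetime is tied to buf's
}

void main()
{
    char[8] buffer = "abcdefgh";
    auto res = head(buffer[], 3); // fine: res cannot outlive buffer
    assert(res == "abc");
    // static char[] g; g = res; // error with -dip1000: scope variable escapes
}
```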
The thing I like most is that, thanks to scope inference, you don't even need to add those attributes manually. Given a real-world case like:
@safe @nogc nothrow:
void escape(char[]);
void main()
{
char[1024] buffer;
import std.math : PI;
auto res = formatInPlace(buffer[], byte(-127));
// escape(res); // OK - does not compile
assert( formatInPlace(buffer[], byte(-127)) == "-127" );
assert( formatInPlace(buffer[], ubyte(255)) == "255" );
assert( formatInPlace(buffer[], int.min) == "-2147483648" );
assert( formatInPlace(buffer[], uint.max) == "4294967295" );
assert( formatInPlace(buffer[], long.min) == "-9223372036854775808" );
assert( formatInPlace(buffer[], ulong.max) == "18446744073709551615" );
assert( formatInPlace(buffer[], cast(float)PI) == "3.14159274101257324219" );
assert( formatInPlace(buffer[], cast(double)PI) == "3.14159265358979311600" );
static if (real.sizeof > 8)
assert( formatInPlace(buffer[], cast(real)PI) == "3.14159265358979323851" );
else
assert( formatInPlace(buffer[], cast(real)PI) == "3.14159265358979311600" );
}
private extern(C) @nogc nothrow @trusted
int snprintf(char* s, size_t n, const char* format, ...);
char[] formatInPlace(T)(char[] buf, T value)
{
import std.traits : isIntegral, isFloatingPoint, isSigned, Unqual;
static assert (isIntegral!T || isFloatingPoint!T, T.stringof ~ " is not supported.");
if (!buf.ptr || !buf.length)
return null;
alias U = Unqual!T;
static if (isIntegral!U)
{
static if (isSigned!U)
enum convSpec = "d";
else
enum convSpec = "u";
static if (U.sizeof <= 4)
enum fmt = "%" ~ convSpec;
else static if (U.sizeof == 8)
enum fmt = "%ll" ~ convSpec;
else /* cent / ucent */
static assert (0, T.stringof ~ " is not supported.");
}
else
{
static if (is(U == real))
enum fmt = "%.20Lf";
else
enum fmt = "%.20f";
}
int rc = snprintf(&buf[0], buf.length, fmt, value);
assert(rc > 0 && rc < buf.length);
return buf[0 .. rc];
}
The only thing needed to get the code to compile is to change snprintf's signature from
int snprintf(char* s, size_t n, const char* format, ...);
to
int snprintf(scope char* s, size_t n, scope const char* format, ...);
which is already the case in druntime's core.stdc.stdio.
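For completeness, here is roughly how the inference plays out on a reduced template. This is a hypothetical firstHalf function; attribute inference applies because the template's body is available to the compiler:

```d
@safe:

// Template function: scope/return attributes for buf are inferred from the body.
char[] firstHalf()(char[] buf)
{
    return buf[0 .. $ / 2]; // returns part of buf, so buf is inferred `return scope`
}

void main()
{
    char[10] buffer = '0';
    auto res = firstHalf(buffer[]); // compiles under -dip1000, no annotations needed
    assert(res.length == 5);
    // static char[] g; g = res; // would be rejected under -dip1000
}
```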