The Go standard library does not have nullable values for basic types like strings and integers, which can be confusing for someone coming from a dynamically typed language like Ruby, Python, or JavaScript. Here's a comparison of how Ruby sets variable defaults compared to Go:
```ruby
class User
  def name
    @name
  end

  def zip_code
    @zip_code
  end
end

puts User.new.name     # => nil
puts User.new.zip_code # => nil
```
```go
type User struct {
	Name    string
	ZipCode int
}

fmt.Println(User{}.Name)    // => ""
fmt.Println(User{}.ZipCode) // => 0
```
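The same rule applies to every type in Go, not just strings and ints. A quick sketch of the zero values for a few common types:

```go
package main

import "fmt"

func main() {
	// Declaring variables without initializing them yields
	// each type's zero value.
	var (
		s string  // zero value: ""
		i int     // zero value: 0
		f float64 // zero value: 0
		b bool    // zero value: false
		p *string // zero value: nil (only pointer-like types can be nil)
	)
	fmt.Printf("%q %d %v %v %v\n", s, i, f, b, p) // "" 0 0 false <nil>
}
```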
In Ruby, uninitialized variables have the value `nil`, but in Go the default depends on the variable's type: a string defaults to the empty string `""`, and an int defaults to `0`. In Go these defaults are called the "zero value" of a type. This can seem awkward if you're used to working with `nil` and `null`. For example, what if we're writing our `User` struct to a database?
```go
type User struct {
	AccountID int
	Name      string
}

u := User{}
db.NamedQuery("INSERT INTO users (AccountID, Name) VALUES (:accountID, :name)", u)
```
This will attempt to create a database record with an `AccountID` of `0` and a `Name` of `""`, although we likely wanted them to be null in our database. One solution is the `null` package, a third-party library that can wrap any basic value and allow it to be null.
```go
type User struct {
	AccountID null.Int
	Name      null.String
}

u := User{}
db.NamedQuery("INSERT INTO users (AccountID, Name) VALUES (:accountID, :name)", u)
```
This will correctly insert our user record with a null `AccountID` and `Name`, but it introduces other complications, so it's worth taking a step back and asking: why did the designers of Go leave out null in the first place?

One of the main benefits of a strongly typed language is that the compiler can do a lot more to check your code for correctness. When we introduce a new type like `null.String` that is either null *or* a string, we hide that information from Go's type checker and take on the work of ensuring type correctness ourselves. Wherever we used to be able to assume the type of a variable, we now have to check:
```go
func printNullableName(name null.String) {
	if name.Valid {
		stringName := strings.ToUpper(name.ValueOrZero())
		fmt.Println("Name: ", stringName)
	}
}

func printName(name string) {
	name = strings.ToUpper(name)
	fmt.Println("Name: ", name)
}
```
Overall, the downsides of using the `null` package are:
- It introduces a new package dependency that can creep into your entire code base
- It introduces special cases that your code now has to handle
- Those special cases break code composition. See: Examples A and B.
For a strongly-typed language like Go, it's best to avoid using the null package or at least limit the number of places it's used. If we revisit our database example:
```go
type User struct {
	AccountID int
	Name      string
}

func (u *User) Save() {
	db.Exec("INSERT INTO users (AccountID, Name) VALUES (?, ?)",
		NewNullInt(u.AccountID), NewNullString(u.Name))
}

// NewNullString treats the empty string as null.
func NewNullString(s string) sql.NullString {
	if len(s) == 0 {
		return sql.NullString{}
	}
	return sql.NullString{String: s, Valid: true}
}

// NewNullInt treats 0 as null.
func NewNullInt(i int) sql.NullInt64 {
	if i == 0 {
		return sql.NullInt64{}
	}
	return sql.NullInt64{Int64: int64(i), Valid: true}
}
```
We're still using nullable types (here the standard library's `sql.NullString` and `sql.NullInt64`), but they're limited to our database/ORM layer, where we need to translate our zero values into null values. The `User` struct itself has no nullable fields, so we can pass it around our code base without introducing new dependencies or types.
Note that this solution doesn't even need the third-party `null` package: the standard library's `database/sql` package provides the nullable types we need at the boundary.
- Avoid using the `null` package in Go; it introduces special cases and removes the benefits of Go's type checker.
- If you have to use it, abstract the nullable values away from the rest of your code as much as possible in favor of sensible defaults (the zero values). Do the integers or strings in your struct really need to be nullable, or are `0` and `""` sensible default values? Does `null` mean something meaningfully different from `0` or `""`?
- For databases that have nullable columns, the ORM layer is a good place to abstract away null values and set defaults for the rest of your application code.