When an error occurs in a function and there is no expectation that the caller can deal with it, returning {:error, error} feels noisy.
Is this better? It assumes the caller doesn't care; the error is logged for a human operator to see.
def get_some_pods() do
  response = Client.get_pods()

  case response do
    {:ok, pods} -> pods
    {:error, error} ->
      Logger.error(error)
      []
  end
end
Should the caller be given the option to respond?
def get_some_pods() do
  response = Client.get_pods()

  case response do
    {:ok, pods} -> {:ok, pods}
    {:error, error} ->
      Logger.error(error)
      {:error, error}
  end
end
def caller() do
  case get_some_pods() do
    {:ok, pods} -> do_something_with(pods)
    _ ->
      # caller may not care, but at least it has the option
      do_something_with([])
  end
end
Not sure how I feel about the noise. I guess it depends on whether the caller could find a use for the error.
If it's something like an intermittent HTTP request that occasionally fails because of a flaky upstream API, why should my caller care? It'll run again in a few minutes, or it can back off with jitter (using the error). 🤷
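The backoff/jitter option can be sketched like this. Everything here is illustrative: Client is a stub standing in for the flaky upstream (it fails twice, then succeeds), and the attempt limit and delays are made up.

```elixir
# Stub Client standing in for the flaky upstream API: fails the first
# two calls, then succeeds. Purely illustrative.
defmodule Client do
  def get_pods do
    calls = Process.get(:client_calls, 0)
    Process.put(:client_calls, calls + 1)
    if calls < 2, do: {:error, :flaky_upstream}, else: {:ok, ["pod-a", "pod-b"]}
  end
end

defmodule PodFetcher do
  @max_attempts 5

  # Retry with exponential backoff plus jitter, using the error tuple
  # instead of swallowing it.
  def fetch_with_backoff(attempt \\ 1) do
    case Client.get_pods() do
      {:ok, pods} ->
        {:ok, pods}

      {:error, _reason} when attempt < @max_attempts ->
        # 100ms, 200ms, 400ms, ... plus up to 100ms of random jitter
        delay = round(:math.pow(2, attempt - 1) * 100) + :rand.uniform(100)
        Process.sleep(delay)
        fetch_with_backoff(attempt + 1)

      {:error, reason} ->
        {:error, reason}
    end
  end
end
```

Because the error tuple is passed back instead of logged-and-dropped, the caller gets to decide whether a failure is worth retrying at all.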
It should always be the erroring function's responsibility to do the logging, but in a library, this can be noisy or produce a format incompatible with the application's logger.
This sucks:
{my: "lovely", json: {log: "messages"}}
{my: "lovely", json: {log: "messages"}}
{my: "lovely", json: {log: "messages"}}
{my: "lovely", json: {log: "messages"}}
{my: "lovely", json: {log: "messages"}}
HAHAHA I'M A DEPENDENT APPLICATION FUCK YOU STDOUT/STDERR
{my: "lovely", json: {log: "messages"}}
{my: "lovely", json: {log: "messages"}}
Instead, use telemetry to dispatch error events, and supply a default log_handler/4 that the top-level application can optionally use — or it can at least bind the error events to its own log formatter.
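A sketch of that pattern, assuming the :telemetry package is a dependency. The event name, metadata keys, and module names are all illustrative; only :telemetry.execute/3 and :telemetry.attach/4 are the library's real API.

```elixir
defmodule MyLib do
  require Logger

  # The library dispatches an event instead of logging directly.
  def get_some_pods do
    case Client.get_pods() do
      {:ok, pods} ->
        {:ok, pods}

      {:error, error} ->
        :telemetry.execute([:my_lib, :pods, :error], %{count: 1}, %{error: error})
        {:error, error}
    end
  end

  # Default handler the host application can opt into. Telemetry handlers
  # take four arguments, hence the log_handler/4 mentioned above.
  def log_handler(_event, _measurements, %{error: error}, _config) do
    Logger.error(inspect(error))
  end
end

# In the host application's start/2, opt in to the default handler —
# or attach its own function to format events for its logger:
#
#   :telemetry.attach("my-lib-errors", [:my_lib, :pods, :error],
#     &MyLib.log_handler/4, nil)
```

The library stays silent by default; the application decides whether, where, and in what format the errors get logged.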