Python code test - CARTO

What follows is a technical test for this job offer at CARTO:

Build the following and make it run as fast as you possibly can using Python 3 (vanilla). The faster it runs, the more you will impress us!

Your code should:

- Count the number of lines in the CSV file
- Calculate the average value of the tip_amount column

All of that in the most efficient way you can come up with.

That's it. Make it fly!
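Taken together with the thread below, the task boils down to counting the lines of a large CSV and averaging its tip_amount column. A minimal vanilla-Python baseline (assuming a header row that contains a tip_amount column, and a comma delimiter) might look like:

```python
import sys

SEPARATOR = ','  # assumed delimiter

def solve(path):
    # One pass over the file: locate the tip_amount column in the
    # header, then accumulate its values and count the data rows.
    with open(path, 'rt', encoding='utf-8') as f:
        field = next(f).rstrip('\n').split(SEPARATOR).index('tip_amount')
        total = 0.0
        rows = 0
        for line in f:
            total += float(line.split(SEPARATOR)[field])
            rows += 1
    return rows, total / rows

if __name__ == '__main__' and len(sys.argv) > 1:
    rows, avg = solve(sys.argv[1])
    print('Number of lines:', rows + 1)  # +1 for the header
    print('Average tip_amount:', avg)
```

This is only a starting point; the comments below explore ways to squeeze it further.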


@ghost ghost commented May 25, 2017

Hi @jorgesancha, I'm downloading it with the aws cli, and it tells me it's 1.6 GB.
I'm downloading it with the command:
aws s3 cp s3://carto-1000x/data/yellow_tripdata_2016-01.csv .
Is there an error somewhere?


@guillemborrell guillemborrell commented May 25, 2017

1.6 GB here as well


@midiadiaIT midiadiaIT commented Feb 16, 2018

import time
import pandas as pd

filename = 'data.csv'

t0 = time.time()
n = sum(1 for line in open(filename))  # includes the header line
print('Number of lines: ', n)
print('Elapsed time : ', time.time() - t0)

t0 = time.time()
df = pd.read_csv(filename)
# Divide by the number of data rows (len(df)), not n, which counts the header
print('Average of tip_amount column: ', df['tip_amount'].sum() / len(df))
print('Elapsed time : ', time.time() - t0)
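As an aside, the line count in the snippet above can usually be made much faster in vanilla Python by reading the file in large binary chunks and counting newline bytes instead of iterating line by line. A sketch (the 1 MiB chunk size is an arbitrary choice):

```python
def count_lines(path, chunk_size=1 << 20):
    # Count newline bytes in large binary chunks; this avoids creating
    # one Python string object per line.
    count = 0
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            count += chunk.count(b'\n')
    return count
```

Note that it counts newline characters, so a file without a trailing newline reports one less than the number of lines.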


@ddelizia ddelizia commented Jun 9, 2018

Can pandas be used for this test? You said "Python 3 (vanilla)"; does this mean we can use external libraries?


@thinkdifferent thinkdifferent commented May 12, 2019

What (if any) external libraries are allowed? Pandas?


@MiguelBarriosAlvarez MiguelBarriosAlvarez commented Nov 18, 2019

Method 1

import pandas as pd
import time

data = pd.read_csv("2018_Yellow_Taxi_Trip_Data.csv")

t0 = time.time()  # Initial count

row = data.shape[0]  # Total rows
meanColumn = data["tip_amount"].sum() / row  # Mean of the "tip_amount" column

t1 = time.time()  # Final count
tiempo = round(t1 - t0, 2)  # Total time

print("Number of rows: \n", row)
print("Mean of tip_amount: \n", meanColumn)
print("Total execution time: \n", tiempo)

Method 2

import pandas as pd
import time

t0 = time.time()  # Initial count

data = pd.read_csv("2018_Yellow_Taxi_Trip_Data.csv")

tipAmount1 = 0
n = 0
for index, i in data.iterrows():
    tipAmount1 = tipAmount1 + i['tip_amount']  # Accumulate "tip_amount"
    n = n + 1  # Count rows

meanTipAmount = tipAmount1 / n  # Mean of "tip_amount"
t1 = time.time()  # Final count
tiempo = round(t1 - t0, 2)  # Total time

print("Mean of tip_amount: \n", meanTipAmount)
print("Number of rows: \n", n)
print("Total execution time: \n", tiempo)


@manolinux manolinux commented Feb 7, 2020

It takes more or less the same processing time as the previous pandas solutions (vanilla Python):

import time

file = '/home/manolinux/yellow_tripdata_2016-01.csv'
field = 'tip_amount'
separator = ','

def findnth(string, substring, n):
    # Index of the (n+1)-th occurrence of substring, or -1 if absent
    parts = string.split(substring, n + 1)
    if len(parts) <= n + 1:
        return -1
    return len(string) - len(parts[-1]) - len(substring)

t0 = time.time()

# Get the column number of the 'tip_amount' field from the header
with open(file) as f:
    first_line = f.readline().strip()
fieldNumber = first_line.split(separator).index(field)

countLines = 0
accumulatedTips = 0.0

# Buffering option of open seems not to do much
with open(file, "r", 163840) as f:
    next(f)  # skip the header line
    for line in f:
        if fieldNumber == 0:
            posInLine = 0
        else:
            posInLine = findnth(line, separator, fieldNumber - 1) + 1
        # Assumes the field is not the last column, so a separator follows it
        endField = posInLine + line[posInLine:].index(separator)
        accumulatedTips += float(line[posInLine:endField])
        countLines += 1

print("Number of lines:", countLines)
print("Average tip:", accumulatedTips / countLines)
print('Elapsed time : ', time.time() - t0)


@boriel boriel commented Aug 10, 2020

Simple (vanilla) Python (not even csv is needed):

import sys

SEPARATOR = ','  # Some CSV formats use other separators like tabs or ';'
generator = enumerate(open(sys.argv[1], 'rt', encoding='utf-8'))
index, header = next(generator)
field_index = header.split(SEPARATOR).index('tip_amount')
tip_amount_acc = 0

for i, line in generator:
   tip_amount_acc += float(line.split(SEPARATOR)[field_index])

print("Total lines: {}".format(i + 1))
print("Tip amount average: {}".format(tip_amount_acc / i))

Of note: SEPARATOR is configurable, and the field position is determined automatically (field_index).
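One caveat with plain str.split (hypothetical for this dataset): it miscounts fields if any value contains a quoted comma. The stdlib csv module handles quoting for a modest speed cost; a sketch along the same lines:

```python
import csv
import sys

def tip_average(path):
    # Same one-pass scheme, but csv.reader copes with quoted separators.
    with open(path, newline='', encoding='utf-8') as f:
        reader = csv.reader(f)
        field_index = next(reader).index('tip_amount')  # header row
        total = 0.0
        rows = 0
        for row in reader:
            total += float(row[field_index])
            rows += 1
    return rows, total / rows

if __name__ == '__main__' and len(sys.argv) > 1:
    rows, avg = tip_average(sys.argv[1])
    print("Total lines: {}".format(rows + 1))  # +1 for the header
    print("Tip amount average: {}".format(avg))
```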


@blackrez blackrez commented Jul 17, 2021


It was fun to play with; there are a lot of solutions, but I like these two.
I think streaming is the future of data, and I hate downloading big files.

Solution 1

import csv
import urllib.request
import codecs

url = ""  # URL of the CSV file
stream = urllib.request.urlopen(url)
csvfile = csv.DictReader(codecs.iterdecode(stream, 'utf-8'))
count = 0
z = 0.0
for line in csvfile:
    z = float(line['tip_amount']) + z
    count = count + 1

avg = z / count
print("Number of lines:", count)
print("Average tip:", avg)

Also, I think people should be lazy and use someone else's computing capacity (I cheated: I used boto3, but I didn't have time to rewrite an SDK), in this case AWS and S3.

Solution 2

import boto3

s3 = boto3.client('s3')

# Bucket and key taken from the dataset mentioned earlier in the thread
resp = s3.select_object_content(
    Bucket='carto-1000x',
    Key='data/yellow_tripdata_2016-01.csv',
    ExpressionType='SQL',
    Expression="SELECT avg(cast(tip_amount as float)), count(1) FROM s3object s",
    InputSerialization={'CSV': {'FileHeaderInfo': 'Use', 'FieldDelimiter': ',', 'RecordDelimiter': '\n'}, 'CompressionType': 'NONE'},
    OutputSerialization={'CSV': {}},
)
for event in resp['Payload']:
    if 'Records' in event:
        records = event['Records']['Payload'].decode('utf-8')
        print(records)

I think I will write an article on these solutions.

edit: I forgot the count in Solution 2.
