In September 2021, the JupyterLab Desktop app (an Electron application) was released by Mehmet Bektas (github repo). It can be installed with Homebrew:

brew install --cask jupyterlab
def implicit_chaining():
    """
    This is the Python 2 style: just raising another exception from within an
    except block. In Python 3 this causes the exceptions to be chained with the
    message "During handling of the above exception, another exception
    occurred:", which is usually not correct when just re-raising with a more
    appropriate type.
    """
    try:
        {}['missing']  # illustrative body; the original snippet was truncated here
    except KeyError:
        raise ValueError('missing key')  # implicitly chained via __context__
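For contrast, here is a minimal sketch of the Python 3 idiom (the function names and exception types are illustrative, not from the original snippet): raise ... from exc records the relationship explicitly, and raise ... from None suppresses the chained traceback when the original exception is just noise.

def explicit_chaining():
    try:
        {}['missing']
    except KeyError as exc:
        # Sets __cause__; the traceback reads "The above exception was the
        # direct cause of the following exception:".
        raise ValueError('missing key') from exc

def suppressed_chaining():
    try:
        {}['missing']
    except KeyError:
        # "from None" hides the original KeyError from the traceback entirely.
        raise ValueError('missing key') from None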
There exist several DI frameworks / libraries in the Scala ecosystem, and a few benefits are commonly claimed for them. But the more functional code you write, the more you'll realize there is no need to use any of them.
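The framework-free alternative is just constructor injection. A minimal sketch, written in Python like the rest of these notes rather than Scala, with hypothetical class names: dependencies are plain constructor parameters, wired once at the composition root.

class Database:
    def fetch_user(self, user_id):
        return {'id': user_id, 'name': 'example'}

class UserService:
    # The dependency is an ordinary constructor parameter, so a test can
    # pass in a stub instead of a real Database. No container required.
    def __init__(self, db):
        self.db = db

    def greeting(self, user_id):
        return "Hello, %s!" % self.db.fetch_user(user_id)['name']

# The composition root: the one place where the object graph is wired up.
service = UserService(Database())
print(service.greeting(42))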
import numpy as np
import pandas as pd
from tabulate import tabulate

# Render a DataFrame as an org-mode table.
df = pd.DataFrame(np.random.random((4, 3)), columns=['A', 'B', 'C'])
print(tabulate(df, headers="keys", tablefmt="orgtbl"))
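With tablefmt="orgtbl", tabulate emits org-mode table syntax: a header row such as | | A | B | C |, a |----+----| separator line, then one row per DataFrame index, so the output can be pasted straight into an org buffer (or returned from an org-babel source block).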
from flask import Flask
import flask_restful as restful           # 'from flask.ext import restful' no longer works; flask.ext was removed in Flask 1.0
from flask_sqlalchemy import SQLAlchemy   # likewise replaces 'from flask.ext.sqlalchemy import SQLAlchemy'

app = Flask(__name__)
app.config.from_object('config')  # expects a config module/object on the import path

# flask-sqlalchemy
db = SQLAlchemy(app)
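A minimal sketch of how the pieces fit together, assuming the setup above (the User model, its columns, and the route are hypothetical):

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)

class UserResource(restful.Resource):
    def get(self, user_id):
        user = User.query.get_or_404(user_id)
        return {'id': user.id, 'name': user.name}

api = restful.Api(app)
api.add_resource(UserResource, '/users/<int:user_id>')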
from pyspark import SparkContext
from pyspark.sql import SQLContext, Row
from pyspark.sql.types import StructField, StructType, StringType, IntegerType  # the type classes live in pyspark.sql.types, not pyspark.sql

sc = SparkContext('spark://master:7077', 'Spark SQL Intro')
sqlContext = SQLContext(sc)

# Drop the CSV header (it starts with 'exchange'), split on commas, and
# turn each record into a dict with a float dividend value.
dividends = sc.textFile("hdfs://master:9000/user/hdfs/NYSE_dividends_A.csv")
dividends_parsed = dividends.filter(lambda r: not r.startswith('exchange')).map(lambda r: r.split(',')).map(
    lambda row: {'exchange': row[0], 'stock_symbol': row[1], 'date': row[2], 'dividends': float(row[3])})
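A plausible next step, sketched under the assumption that the parsed records are meant to be queried with Spark SQL (the imports above suggest an explicit StructType schema was applied; for brevity this sketch lets Spark infer the schema from Row objects, and the aggregate query is illustrative):

dividends_df = sqlContext.createDataFrame(dividends_parsed.map(lambda d: Row(**d)))
dividends_df.registerTempTable('dividends')

# Example query: average dividend per stock symbol.
sqlContext.sql(
    "SELECT stock_symbol, AVG(dividends) AS avg_dividend "
    "FROM dividends "
    "GROUP BY stock_symbol"
).show()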