from math import sqrt as sq

class Line(object):
    def __init__(self, coor1=(0, 0), coor2=(0, 0)):
        self.coor1 = coor1
        self.coor2 = coor2

    def distance(self):
        # Take the values of coor1 and coor2, solve for the distance
        # between the two given points, and print a statement.
        x1, y1 = self.coor1
        x2, y2 = self.coor2
        d = sq((x2 - x1) ** 2 + (y2 - y1) ** 2)
        print('The distance between the two points is', d)
        return d
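A quick usage check for the distance method completed above:

line = Line((3, 2), (8, 10))
line.distance()  # prints the distance, sqrt(25 + 64) ≈ 9.434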
# 10_basic.py
# 15_make_soup.py
# 20_search.py
# 25_navigation.py
# 30_edit.py
# 40_encoding.py
# 50_parse_only_part.py
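These file names read like a step-by-step Beautiful Soup walkthrough: making the soup, searching, navigating, editing, encodings, and parsing only part of a document. A minimal sketch of a few of those steps, assuming the bs4 package and a hypothetical html_doc string (neither appears in the listing above):

from bs4 import BeautifulSoup, SoupStrainer

html_doc = "<html><body><p>Hello</p><a href='/a'>A</a><a href='/b'>B</a></body></html>"

# 15_make_soup: parse the markup into a soup object
soup = BeautifulSoup(html_doc, "html.parser")

# 20_search: find tags by name
for link in soup.find_all("a"):
    print(link.get("href"))

# 25_navigation: walk the parse tree
print(soup.body.p.string)

# 50_parse_only_part: parse only the tags you care about
links_only = BeautifulSoup(html_doc, "html.parser", parse_only=SoupStrainer("a"))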
Note: I'm currently taking a break from this course to focus on my studies, so I can finally graduate.
This chapter introduces Spark’s core abstraction for working with data, the Resilient Distributed Dataset (RDD). An RDD is simply a distributed collection of elements. In Spark all work is expressed as either creating new RDDs, transforming existing RDDs, or calling operations on RDDs to compute a result. Under the hood, Spark automatically distributes the data contained in RDDs across your cluster and parallelizes the operations you perform on them.
Both data scientists and engineers should read this chapter, as RDDs are the core concept in Spark. We highly recommend that you try some of these examples in an interactive shell (see Introduction to Spark’s Python and Scala Shells). In addition, all code in this chapter is available in the book’s GitHub repository.
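As a taste of that workflow (create an RDD, transform it, then call an action), here is a minimal PySpark sketch; the interactive shell provides the SparkContext sc automatically, while a standalone script creates one as below:

from pyspark import SparkContext

sc = SparkContext("local", "rdd-basics")  # the PySpark shell provides sc for you

# Create an RDD from a local collection
lines = sc.parallelize(["pandas", "i like pandas", "errors everywhere"])

# Transformation: lazily defines a new RDD, nothing runs yet
pandas_lines = lines.filter(lambda line: "pandas" in line)

# Actions: trigger the computation and return results to the driver
print(pandas_lines.count())    # 2
print(pandas_lines.collect())  # ['pandas', 'i like pandas']

sc.stop()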
import shelve

estado = 1
while estado == 1:
    opcion = int(input('Choose a menu option: '))
    if opcion == 1:
        neto = 0.0                            # running total for this sale
        pi = shelve.open("db_productos.dat")  # product database
        code_bar = -1                         # sentinel so the inner loop starts
        while code_bar != 0:
            h = 0
            code_bar = int(input('Enter a barcode, or 0 to close out the sale: '))
            if code_bar != 0:
                pass  # the original listing is truncated here
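The truncated branch presumably looks up the scanned product in the shelve database and adds its price to the running total. A hypothetical completion, assuming each entry is keyed by the barcode string and stores a (name, price) pair; neither detail appears in the original listing:

# Hypothetical: key layout and record format are assumptions
key = str(code_bar)
if key in pi:
    nombre, precio = pi[key]
    neto += precio
    print(nombre, precio)
else:
    print('Unknown barcode:', code_bar)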
# In[5]:
x = [[795.        , 501.0292887 ],
     [794.97154472, 501.        ],
     [794.96078431, 500.        ],
     [795.        , 499.09090909],
     [795.03921569, 500.        ],
     [795.02777778, 501.        ],
     [795.        , 501.0292887 ]]