Pedro Rodriguez (EntilZha)
View Jupyter error
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
/home/user/.local/lib/python3.5/site-packages/IPython/core/formatters.py in __call__(self, obj)
697 type_pprinters=self.type_printers,
698 deferred_pprinters=self.deferred_printers)
--> 699 printer.pretty(obj)
700 printer.flush()
701 return stream.getvalue()
/home/user/.local/lib/python3.5/site-packages/IPython/lib/pretty.py in pretty(self, obj)
View MinByColumn.scala
import org.apache.spark.sql.expressions.MutableAggregationBuffer
import org.apache.spark.sql.expressions.UserDefinedAggregateFunction
import org.apache.spark.sql.Row
import sqlContext.implicits._
import org.apache.spark.sql.types.{StructType, StructField, DataType, ByteType, ShortType, IntegerType, LongType, FloatType, DoubleType, DecimalType, StringType, BinaryType, BooleanType, TimestampType, DateType, ArrayType}
class MinBy(valueType: DataType, minType: DataType) extends UserDefinedAggregateFunction {
  def inputSchema: StructType =
    StructType(StructField("value", valueType) :: StructField("minCol", minType) :: Nil)
  def bufferSchema: StructType =
    StructType(StructField("value", valueType) :: StructField("minCol", minType) :: Nil)
View gist:9c585662ef7cda820c311d1c7eb16e42
[info] Constructing Javadoc information...
[error] /Users/pedro/Documents/Code/spark/core/target/java/org/apache/spark/serializer/SerializationDebugger.java:159: error: cannot find symbol
[error] static private org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassReflection reflect () { throw new RuntimeException(); }
[error] ^
[error] symbol: class ObjectStreamClassReflection
[error] location: class SerializationDebugger
[error] /Users/pedro/Documents/Code/spark/core/target/java/org/apache/spark/serializer/SerializationDebugger.java:22: error: class SerializationDebugger is already defined in package org.apache.spark.serializer
[error] static private class SerializationDebugger {
[error] ^
[error] /Users/pedro/Documents/Code/spark/mllib/target/java/org/apache/spark/ml/PipelineStage.java:6: error: Logging is not public in org.apache.spark.internal; cannot be accessed from outside package
View matplotlib log
This file has been truncated.
pedro on terminus.local in ~
$ cd Code/matplotlib
pedro on terminus.local in ~/Code/matplotlib (v2.x)
$ python setup.py build
============================================================================
Edit setup.cfg to change the build options
BUILDING MATPLOTLIB
matplotlib: yes [1.5.1+405.gb6e0b9b]
View 101471-3.txt
In addition to The Black Hermit and Wizard of the Crow, this author is better known for a novel in which Mugo betrays the revolutionary Kihika, as well as another in which Munira burns down Wanja's brothel.
View errors
$ cargo build
Compiling max_flow v0.1.0 (file:///Users/pedro/Documents/Code/max_flow)
src/lib.rs:128:56: 128:65 error: no method named `collect` found for type `BfsIterator<'_, V, i64, fn(_, i64, _) {flow_predicate}>` in the current scope
src/lib.rs:128 BfsIterator::new(self, source, flow_predicate).collect()
^~~~~~~~~
src/lib.rs:128:56: 128:65 note: the method `collect` exists but the following trait bounds were not satisfied: `BfsIterator<'_, V, i64, fn(_, i64, _) {flow_predicate}> : core::iter::Iterator`
src/lib.rs:128:56: 128:65 help: items from traits can only be used if the trait is implemented and in scope; the following trait defines an item `collect`, perhaps you need to implement it:
src/lib.rs:128:56: 128:65 help: candidate #1: `core::iter::Iterator`
error: aborting due to previous error
Could not compile `max_flow`.
View lib.rs
pub struct BfsIterator<'a, V: 'a + Copy, E: 'a + Copy> {
    queue: VecDeque<VertexId>,
    graph: &'a Graph<V, E>,
    distances: Vec<u64>,
    parents: Vec<VertexId>,
    predicate: &'a Fn(V, E, V) -> bool
}

impl<'a, V: Copy, E: Copy> Graph<V, E> {
    pub fn bfs_iter(&'a self, source: VertexId) -> BfsIterator<V, E> {
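The `cargo build` errors above say it directly: `collect` is unavailable because `BfsIterator` never implements `Iterator`. A minimal sketch of the missing impl, under simplifying assumptions (plain `usize` vertex ids and an adjacency list instead of the gist's `Graph<V, E>`; `BfsIter` and its fields are illustrative names, not the gist's actual code):

```rust
use std::collections::VecDeque;

// Simplified stand-in for the gist's BfsIterator: walks an adjacency
// list breadth-first and yields vertex ids in visit order.
struct BfsIter<'a> {
    queue: VecDeque<usize>,
    visited: Vec<bool>,
    adj: &'a [Vec<usize>],
}

impl<'a> BfsIter<'a> {
    fn new(adj: &'a [Vec<usize>], source: usize) -> Self {
        let mut visited = vec![false; adj.len()];
        visited[source] = true;
        let mut queue = VecDeque::new();
        queue.push_back(source);
        BfsIter { queue, visited, adj }
    }
}

// Implementing Iterator is what makes `collect()` (and every other
// iterator adapter) available on the type.
impl<'a> Iterator for BfsIter<'a> {
    type Item = usize;

    fn next(&mut self) -> Option<usize> {
        // Pop the next frontier vertex; None ends iteration.
        let v = self.queue.pop_front()?;
        // Enqueue unvisited neighbors before yielding v.
        for &w in &self.adj[v] {
            if !self.visited[w] {
                self.visited[w] = true;
                self.queue.push_back(w);
            }
        }
        Some(v)
    }
}

fn main() {
    // Edges: 0 -> 1, 0 -> 2, 1 -> 3
    let adj = vec![vec![1, 2], vec![3], vec![], vec![]];
    let order: Vec<usize> = BfsIter::new(&adj, 0).collect();
    assert_eq!(order, vec![0, 1, 2, 3]);
}
```

Once the trait is implemented, the original `BfsIterator::new(self, source, flow_predicate).collect()` call compiles, because `collect` is a provided method on any `Iterator`.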
View Cargo.toml
[package]
name = "ffitest"
version = "0.1.0"
authors = ["Pedro Rodriguez <ski.rodriguez@gmail.com>"]

[dependencies]

[lib]
name = "ffitest"
crate-type = ["dylib"]
View graph.rs
use std::collections::HashMap;

#[derive(Debug)]
pub struct Graph<V: Copy, E: Copy> {
    pub vertexes: HashMap<usize, Vertex<V>>,
    pub edges: HashMap<(usize, usize), E>,
    adjacency_matrix: Vec<Vec<bool>>
}

#[derive(Debug)]
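The struct above keeps vertex payloads and edge payloads in hash maps alongside a boolean adjacency matrix, which means every insertion has to update two places. A sketch of how that bookkeeping could work (simplified: vertex payloads stored as bare `V` rather than the gist's `Vertex<V>`; method names and the matrix growth strategy are assumptions):

```rust
use std::collections::HashMap;

// Minimal version of the gist's Graph: payload maps plus a
// boolean adjacency matrix kept in sync on every insert.
#[derive(Debug)]
pub struct Graph<V: Copy, E: Copy> {
    pub vertexes: HashMap<usize, V>,
    pub edges: HashMap<(usize, usize), E>,
    adjacency_matrix: Vec<Vec<bool>>,
}

impl<V: Copy, E: Copy> Graph<V, E> {
    pub fn new() -> Self {
        Graph {
            vertexes: HashMap::new(),
            edges: HashMap::new(),
            adjacency_matrix: Vec::new(),
        }
    }

    // Adding a vertex grows the matrix by one column (in every
    // existing row) and one new row.
    pub fn add_vertex(&mut self, value: V) -> usize {
        let id = self.vertexes.len();
        self.vertexes.insert(id, value);
        for row in &mut self.adjacency_matrix {
            row.push(false);
        }
        self.adjacency_matrix.push(vec![false; id + 1]);
        id
    }

    // An edge is recorded both in the payload map and in the matrix.
    pub fn add_edge(&mut self, from: usize, to: usize, weight: E) {
        self.edges.insert((from, to), weight);
        self.adjacency_matrix[from][to] = true;
    }

    pub fn has_edge(&self, from: usize, to: usize) -> bool {
        self.adjacency_matrix[from][to]
    }
}

fn main() {
    let mut g: Graph<&str, i64> = Graph::new();
    let a = g.add_vertex("a");
    let b = g.add_vertex("b");
    g.add_edge(a, b, 7);
    assert!(g.has_edge(a, b) && !g.has_edge(b, a));
}
```

The matrix gives O(1) edge-existence checks at the cost of O(V²) space; the hash maps carry the payloads the matrix cannot.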
View RDD.cpp
#include <iostream>
#include <vector>

using namespace std;

template <typename A>
class RDD {
public:
    vector<A> sequence;
    RDD(vector<A> sequence): sequence(sequence) {}