It's easy to abuse the BLAS interface functions with "nice" Julia objects. The following will usually cause a segfault:
using Base.LinAlg.BLAS
x = ones(10)            # x holds only 10 elements
scal!(10000,10.0,x,1)   # but BLAS is told to scale 10000 of them, so it writes far past the end of the buffer
%mypath this is matlab's path without toolboxes
function mypath
C = textscan(path,'%s','Delimiter',pathsep);
C = C{1};
mr = matlabroot;
mrlen = length(mr);
function printer(x)
#!/usr/bin/env python
import smtplib
import json
from email.mime.text import MIMEText
message = '''Hello {santa}!
It's the secret santa elf here! You have been allocated to be secret
santa for {name} ({email}). Remember you only have a budget of $50, so
#include <iostream> | |
#include <cstdio> | |
using std::cout; | |
using std::endl; | |
__global__ | |
void print_a(int* a, const int n) { | |
for (int i=0; i<n; i++) { | |
printf("a[%d] = %d\n",i,a[i]);
kernel_1: thread 0 writing 0 to shared memory.
kernel_1: thread 1 writing 1 to shared memory.
kernel_1: thread 2 writing 2 to shared memory.
kernel_1: thread 3 writing 3 to shared memory.
kernel_1: thread 4 writing 4 to shared memory.
kernel_2: thread 0 reading 0 from shared memory.
kernel_2: thread 1 reading 1 from shared memory.
kernel_2: thread 2 reading 2 from shared memory.
kernel_2: thread 3 reading 3 from shared memory.
kernel_2: thread 4 reading 4 from shared memory.
/usr/local/cuda/bin/..//include/thrust/detail/function.h(60): error: calling a __device__ function("operator()") from a __host__ __device__ function("operator()") is not allowed
detected during:
instantiation of "Result thrust::detail::wrapped_function<Function, Result>::operator()(const Argument &) const [with Function=<unnamed>::RngInit, Result=void, Argument=signed int]"
/usr/local/cuda/bin/..//include/thrust/system/detail/sequential/for_each.h(83): here
instantiation of "InputIterator thrust::system::detail::sequential::for_each_n(thrust::system::detail::sequential::execution_policy<DerivedPolicy> &, InputIterator, Size, UnaryFunction) [with DerivedPolicy=thrust::detail::seq_t, InputIterator=thrust::counting_iterator<int, thrust::use_default, thrust::use_default, thrust::use_default>, Size=signed long, UnaryFunction=<unnamed>::RngInit]"
# 2017-04-06
#
# The sequence of commands below can be used to configure a Dell XPS 15 9560 laptop
# for CUDA development and testing. No special care is taken for battery life. In
# the end, the Nvidia GPU is used to drive the display and may run CUDA executables.
#
# update ubuntu
sudo apt update
sudo apt upgrade
RDBMS-based job queues have been criticized recently for being unable to handle heavy loads. And they deserve it, to some extent, because the queries used to safely lock a job have been pretty hairy. SELECT FOR UPDATE followed by an UPDATE works fine at first, but then you add more workers, and each is trying to SELECT FOR UPDATE the same row (and maybe throwing NOWAIT in there, then catching the errors and retrying), and things slow down.
On top of that, they have to actually update the row to mark it as locked, so the rest of your workers are sitting there waiting while one of them propagates its lock to disk (and the disks of however many servers you're replicating to). QueueClassic got some mileage out of the novel idea of randomly picking a row near the front of the queue to lock, but I still can't seem to get more than an extra few hundred jobs per second out of it under heavy load.
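As a rough sketch of the hairy pattern described above (the table and column names here are hypothetical, not taken from any particular library), each worker runs something like:

```sql
-- Hypothetical jobs table; every worker races to claim the row at the front.
BEGIN;
SELECT id FROM jobs
WHERE locked_at IS NULL
ORDER BY run_at
LIMIT 1
FOR UPDATE NOWAIT;   -- fails immediately if another worker already holds the row,
                     -- so the caller must catch the error and retry

-- ...then mark the claimed row as locked, forcing a write before any work starts:
UPDATE jobs SET locked_at = now() WHERE id = /* claimed id */;
COMMIT;
```

With one worker this is fine; with many, most of the retries land on the same front-of-queue row, which is exactly the contention the paragraph above describes.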
So, many developers have started going straight t
Peter Naur, 1985
(copied from http://alistair.cockburn.us/ASD+book+extract%3A+%22Naur,+Ehn,+Musashi%22)
The present discussion is a contribution to the understanding of what programming is. It suggests that programming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand. This suggestion is in contrast to what appears to be a more common notion, that programming should be regarded as a production of a program and certain other texts.