@bradfa
Last active June 13, 2017 16:07
My notes about scaleway

Scaleway X64-60G instance built core-image-minimal in 46 minutes.

Scaleway C2S instance built core-image-minimal in 167 minutes.

Scaleway VC1S instance hadn't finished building core-image-minimal after 329 minutes, at which point I got bored and stopped the build.

My Lenovo T420s with Core i5-2540M built core-image-minimal in 96 minutes.

My Dell Precision Tower 5810 with Xeon E5-2687W v4 built core-image-minimal in 25 minutes. With some kernel changes and proper cpufreq governor settings on my Dell 5810, this duration has been reduced to 21 minutes.
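The governor can be inspected from userspace before a build. A minimal sketch, assuming the standard Linux cpufreq sysfs interface (the path is the usual one, but the files may be absent inside some VMs or containers):

```shell
#!/bin/sh
# Report the current scaling governor for cpu0, if the cpufreq
# sysfs interface exists (it may not inside some VMs/containers).
gov=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$gov" ]; then
    echo "cpufreq: current governor is $(cat "$gov")"
else
    echo "cpufreq: sysfs interface not available"
fi
# To switch all CPUs to the performance governor (needs root):
#   echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```

On a build box the "performance" governor avoids frequency ramp-up latency during bursty compile jobs, which is where the saved minutes come from.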

Test sequence on Debian Jessie:

sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat cpio python python3 libsdl1.2-dev xterm
git clone git://git.yoctoproject.org/poky
cd poky
git checkout -b morty origin/morty
source oe-init-build-env
vim conf/local.conf

Then edit the local.conf to set MACHINE = "beaglebone" and PACKAGE_CLASSES = "package_ipk". Save and exit.
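For reference, the relevant local.conf lines after editing would look like this (a sketch; the variable names and BitBake assignment syntax are standard):

```conf
MACHINE = "beaglebone"
PACKAGE_CLASSES = "package_ipk"
```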

Fetch all the sources first, so that network delays don't impact the measured build time (this also ends up caching some of the recipe parsing, but that's kind of unavoidable). Set BB_NUMBER_THREADS high to allow more parallel downloads, since all the test machines have fast network connections:

BB_NUMBER_THREADS=16 bitbake -c fetchall core-image-minimal
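Rather than hard-coding 16, the thread count can be derived from the machine's CPU count. A sketch, assuming coreutils `nproc` is available (PARALLEL_MAKE is the companion BitBake variable for make-level parallelism):

```shell
#!/bin/sh
# Print suggested local.conf parallelism settings based on CPU count.
threads=$(nproc)
echo "BB_NUMBER_THREADS = \"$threads\""
echo "PARALLEL_MAKE = \"-j $threads\""
```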

And then do a timed build with nothing else running on the machine, using default values for all unmodified variables:

time bitbake core-image-minimal
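The durations above come straight from `time`; an equivalent wrapper that logs wall-clock seconds might look like this (hypothetical sketch; `sleep 1` stands in for the actual bitbake invocation):

```shell
#!/bin/sh
# Time a build and report wall-clock duration in seconds.
start=$(date +%s)
sleep 1   # stand-in for: bitbake core-image-minimal
end=$(date +%s)
echo "build took $((end - start)) seconds"
```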

Testing was all done on the morty branch, git SHA 6a1f33cc40bfac33cf030fe41e1a8efd1e5fad6f.

Their X64-60GB "10 core cloud server" instances are Intel Xeon D-1531 based. These seem to require you to provision at least 400 GB of "volumes" before the instance will boot initially, which seems silly: the space ends up spread across 4 volumes which you then need to manually combine somehow (JBOD, LVM, or whatnot).

Their C2S "4 core bare metal server" instances are Intel(R) Atom(TM) CPU C2550 @ 2.40GHz based.

Their VC1S "2 core cloud server" instances are Intel Atom C2750 based.

  • Spinning up a new server is noticeably slower than on AWS EC2.
  • Powering down a server can take minutes and seems to require more than just doing a clean shutdown in the server itself, you have to interact with their website or use their API to really shut it down from a billing point of view.
  • You can't easily detach a volume from one server and attach it to a new server, although you can move volumes between two servers that already exist.
  • On their website console, you can watch the entire serial boot process, including the PXE booting. They seem to have a heavily customized, somewhat generic-looking initrd that then hands off to the Linux distro of your choice. The kernel is probably a common one, too.
  • Their default ssh server seems to allow both public key and password auth. EC2 only allows public key auth.
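To match EC2's key-only behavior on a Scaleway instance, password authentication can be disabled in the SSH daemon config (these are standard OpenSSH options; sshd needs a restart afterwards for them to take effect):

```conf
# /etc/ssh/sshd_config
PasswordAuthentication no
PubkeyAuthentication yes
```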
rigred commented Jun 13, 2017

Cloning from GitHub repos is also insanely slow.
