I hereby claim:
- I am davidyell on github.
- I am neon1024 (https://keybase.io/neon1024) on keybase.
- I have a public key whose fingerprint is F6A0 7DCC FA0C 0920 9AF8 D350 97CA D9D1 5E9A 25FD
To claim this, I am signing this object:
<?php
use PHPUnit\Framework\TestCase;

class ArraySplitTest extends TestCase
{
    /**
     * Array of data to test with; items with slot_type 1 are regular items, and items with slot_type 2 split up groups
     *
     * @var array[]
     */
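The preview truncates at the fixture docblock; a sketch of how that fixture might look (the property name `$data` and the field names are assumptions, not from the original):

    protected $data = [
        ['id' => 1, 'slot_type' => 1, 'name' => 'Regular item'],
        ['id' => 2, 'slot_type' => 2, 'name' => 'Group divider'],
        ['id' => 3, 'slot_type' => 1, 'name' => 'Item in the next group'],
    ];

    // ... test methods follow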
<?php
require 'vendor/autoload.php';

class TelesalesWindow
{
    private $now;

    /**
     * TelesalesWindow constructor.
     */
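The gist preview truncates here; a minimal constructor sketch consistent with the `$now` property (the injectable-clock parameter is an assumption for testability, not from the original):

    public function __construct(\DateTime $now = null)
    {
        // Allow tests to inject a fixed "now"; fall back to the current time
        $this->now = $now ?: new \DateTime();
    }
}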
<?php
header('Content-Type: text/plain');

// Disable the Nginx output buffering
header('X-Accel-Buffering: no');

// Flush output to the client as soon as it is produced
ob_implicit_flush(true);
ob_end_flush();

// Note: output_buffering is PHP_INI_PERDIR, so this cannot actually be changed at runtime
ini_set('output_buffering', false);
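With buffering disabled, each echo reaches the client as soon as it is flushed; a minimal usage sketch to demonstrate the streaming (the loop itself is illustrative, not part of the original):

// Stream five lines, one per second, to show progressive output
for ($i = 1; $i <= 5; $i++) {
    echo "Step {$i} complete\n";
    flush();
    sleep(1);
}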
<?php
namespace Neon1024\ArrayReduce;

class ArrayReduce
{
    /**
     * Reduce an input array by removing fields which are present in the second array argument
     *
     * @param array $target The array of data to reduce
     * @param array $reduceBy Array of array keys to be removed from the target array
     */
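The preview stops at the docblock; a sketch of the method it describes (the method name `reduce` and the `array_diff_key` approach are assumptions consistent with the docblock, not confirmed by the original):

    public function reduce(array $target, array $reduceBy): array
    {
        // array_diff_key compares by key, so flip the list of key names into keys
        return array_diff_key($target, array_flip($reduceBy));
    }
}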
Neon1024: What is the accepted way to get lots of changes which are dependent into a repo?
Neon1024: I’ve got an open PR
Neon1024: I now need to add another feature which relies on that PR
Neon1024: Do I branch from my PR branch?
Neon1024: Is it bad etiquette to just fork the repo, make all my changes and then offer a huge PR?
Neon1024: Well, like, not bad etiquette, but I mean, like it’d be super hard to review, right?
Neon1024: But would dependent PRs be as confusing?
Neon1024: I want to solve this, https://github.com/UseMuffin/Webservice/issues/41
Neon1024: But I need this merged, https://github.com/UseMuffin/Webservice/pull/38
Neon1024: Which in turn has to have this merged before it, https://github.com/UseMuffin/Webservice/pull/39
# Preface

These are a number of questions I've distilled over my first week of using Elasticsearch.

When ingesting data into Elasticsearch, should you be using a single index for each application, and then use types within that index to break up the data?

My current setup is like this: a `project` index, with `users` and `leagues` types within that index.
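For illustration, indexing a document under that layout might look like this with the official elasticsearch-php client (a sketch; the client usage and document fields here are assumptions, not from the original):

<?php
require 'vendor/autoload.php';

$client = Elasticsearch\ClientBuilder::create()->build();

// One `project` index for the application, with a type per entity
$client->index([
    'index' => 'project',
    'type' => 'users',
    'id' => 1,
    'body' => ['name' => 'David', 'league_id' => 7],
]);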
<?php
// Abstract the Google named variables so that if they change we can still read the values
$coords = [];
foreach ($bounds as $key => $value) {
    $coords[] = $value;
}

$first = get_object_vars($coords[0]);
$second = get_object_vars($coords[1]);
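Reading the values positionally then avoids depending on Google's minified property names; a sketch of the continuation (assuming the first object holds the latitude range and the second the longitude range, which is an assumption about the bounds structure, not from the original):

// Positional access: the keys are Google's minified names and may change between releases
$latRange = array_values($first);
$lngRange = array_values($second);

$southWest = ['lat' => $latRange[0], 'lng' => $lngRange[0]];
$northEast = ['lat' => $latRange[1], 'lng' => $lngRange[1]];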
<?php
use Phinx\Migration\AbstractMigration;
use Faker\Factory as Faker;
use Faker\ORM\CakePHP\Populator;

class Seed extends AbstractMigration
{
    public function up()
    {
        $faker = Faker::create();
        $populator = new Populator($faker);
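The preview ends before the populator is driven; a sketch of the usual Faker populator calls (the `Users` table name and row count are assumptions, not from the original):

        // Queue twenty fake rows for the Users table, then insert them in one pass
        $populator->addEntity('Users', 20);
        $populator->execute();
    }
}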