@milinda
milinda / CALCITE-968
Created November 20, 2015 21:37
Calcite query optimizer error for stream-to-relation joins

org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node [rel#16:Subset#2.LOGICAL.[]] could not be implemented; planner state:
Root: rel#16:Subset#2.LOGICAL.[]
Original rel:
Sets:
Set#0, type: RecordType(INTEGER id, VARCHAR(10) productId, INTEGER units, TIMESTAMP(0) rowtime)
    rel#11:Subset#0.NONE.[], best=null, importance=0.7290000000000001
        rel#3:LogicalTableScan.NONE.[](table=[KAFKA, ORDERS]), rowcount=100.0, cumulative cost={inf}
Set#1, type: RecordType(VARCHAR(10) productId, INTEGER units)
Root: rel#25:Subset#6.ENUMERABLE.[]
Original rel:
Sets:
Set#0, type: RecordType(TIMESTAMP(0) ROWTIME, INTEGER ID, VARCHAR(10) PRODUCT, INTEGER UNITS)
    rel#8:Subset#0.NONE.[0], best=null, importance=0.531441
        rel#0:LogicalTableScan.NONE.[[0]](table=[STREAMJOINS, ORDERS]), rowcount=100.0, cumulative cost={inf}
    rel#152:Subset#0.BINDABLE.[0], best=rel#151, importance=0.4304672100000001
        rel#151:BindableTableScan.BINDABLE.[[0]](table=[STREAMJOINS, ORDERS]), rowcount=100.0, cumulative cost={1.0 rows, 1.01 cpu, 0.0 io}
    rel#160:Subset#0.ENUMERABLE.[0], best=rel#592, importance=0.4782969000000001
[svn-remote "svn"]
	url = svn+ssh://mpathira@bitternut.cs.indiana.edu/home/dquob/svn-repo/slosh/cloudpipe/trunk
	fetch = :refs/remotes/git-svn
/**
 * Validates the provided whirr configuration, copies the configuration (including
 * the byon node file) to the scratch working directory, and returns the file path
 * to the copied whirr configuration.
 * @param appDeploymentDesc GFac application deployment description
 * @return Whirr configuration file path
 * @throws ProviderException if the configuration fails validation
 * @throws IOException if copying the configuration fails
 */
private String getWhirrConfigurationFile(HadoopApplicationDeploymentDescriptionType appDeploymentDesc) throws ProviderException, IOException {
    if (appDeploymentDesc.getWhirrConfigurationFile() != null) {
        File whirrConfig = new File(appDeploymentDesc.getWhirrConfigurationFile());
        if (!whirrConfig.exists()) {
            // ... (snippet truncated in the original gist; per the Javadoc, a missing
            // file fails validation, otherwise the configuration is copied into the
            // scratch working directory and the copied file's path is returned)
scrapy startproject scrapy_sample

# scrapy_sample/items.py
from scrapy.item import Item, Field

class ScrapySampleItem(Item):
    title = Field()
    link = Field()
    content = Field()

# scrapy_sample/spiders/scrapy_org_spider.py
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy_sample.items import ScrapySampleItem

class ScrapyOrgSpider(BaseSpider):
    name = "scrapy"
    allowed_domains = ["scrapy.org"]
    start_urls = ["http://blog.scrapy.org/"]

    # Minimal parse callback (missing from the original snippet); the
    # //h2/a XPath is an assumption about the blog's markup.
    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        for link in hxs.select("//h2/a"):
            item = ScrapySampleItem()
            item["title"] = link.select("text()").extract()
            item["link"] = link.select("@href").extract()
            yield item

scrapy crawl scrapy
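The `ScrapySampleItem` above uses Scrapy's `Item`/`Field` pattern, which behaves like a dict restricted to the declared keys. A minimal stdlib sketch of that idea (illustrative only, not Scrapy's actual implementation):

```python
class Field(dict):
    """Marker object for a declared item field."""

class ItemMeta(type):
    # Collect Field() class attributes into a `fields` schema dict,
    # removing them from the class body (mirroring Scrapy's behaviour).
    def __new__(mcs, name, bases, attrs):
        fields = {}
        for base in bases:
            fields.update(getattr(base, "fields", {}))
        for key, value in list(attrs.items()):
            if isinstance(value, Field):
                fields[key] = attrs.pop(key)
        attrs["fields"] = fields
        return super().__new__(mcs, name, bases, attrs)

class Item(dict, metaclass=ItemMeta):
    # Only declared fields may be assigned; anything else raises KeyError.
    def __setitem__(self, key, value):
        if key not in self.fields:
            raise KeyError(f"{type(self).__name__} does not support field: {key}")
        super().__setitem__(key, value)

class ScrapySampleItem(Item):
    title = Field()
    link = Field()
    content = Field()
```

Assigning to a declared field works as with a plain dict, while an undeclared key raises `KeyError`, which is how this pattern catches field-name typos early.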
<xs:element name="subtract">
  <xs:complexType>
    <xs:sequence>
      <xs:element minOccurs="0" name="a" type="xs:int"/>
      <xs:element minOccurs="0" name="b" type="xs:int"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>
<bpel:copy>
  <bpel:from>
    <bpel:literal xml:space="preserve">
      <ns:subtract xmlns:ns="http://cts.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"></ns:subtract>
    </bpel:literal>
  </bpel:from>
  <bpel:to variable="SubtractPLRequest" part="parameters"></bpel:to>
</bpel:copy>