yaml_config = """
input_features:
  - name: Original_Title
    type: text
    level: word
    encoder: t5
    reduce_output: null
  - name: Keyword
    type: text
    level: word
    tied_weights: Original_Title
    encoder: t5
    reduce_output: null
output_features:
  - name: Optimized_Title
    type: sequence
    level: word
    decoder: generator
"""

# Write the config to disk so the ludwig CLI can pick it up.
with open("config.yaml", "w") as f:
    f.write(yaml_config)
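Before training, it can be worth checking that the file parses as intended. Below is a minimal sanity check; the YAML string is repeated so the snippet stands alone, and PyYAML is assumed to be installed (it is a Ludwig dependency):

```python
import yaml

# Same config as above, repeated here so this snippet is self-contained.
yaml_config = """
input_features:
  - name: Original_Title
    type: text
    level: word
    encoder: t5
    reduce_output: null
  - name: Keyword
    type: text
    level: word
    tied_weights: Original_Title
    encoder: t5
    reduce_output: null
output_features:
  - name: Optimized_Title
    type: sequence
    level: word
    decoder: generator
"""

# Parse and confirm the feature names and decoder came through intact.
config = yaml.safe_load(yaml_config)
print([f["name"] for f in config["input_features"]])  # ['Original_Title', 'Keyword']
print(config["output_features"][0]["decoder"])        # generator
```

If the indentation is lost (as happens when pasting into Colab), `yaml.safe_load` will either raise an error or silently produce the wrong structure, which is one way the "two different files" confusion below can arise.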
@marcusbianco

marcusbianco commented Dec 22, 2020

@mithila2806

mithila2806 commented Dec 22, 2020

Yes. I did notice that too - the two files are different.

That's the reason why I changed it in my code and used Hamlet's updated YAML file:
https://gist.githubusercontent.com/hamletbatista/e0498d79dfef5350cec8171d8e4a1f03/raw/e012a235522773fca9d3543907193172232bb44f/ludwig_t5_config.yaml
(I have pasted the screenshots above)

My assumption is @shyamcody added these lines of code:

from google.colab import files
files.upload()

and uploaded the correct YAML file from his local folder/drive.

Regards,
Mithila K

@nearchoskatsanikakis

nearchoskatsanikakis commented Dec 27, 2020

Hi @mithila2806 I am also stuck at the same part as you with the error "ValueError: The following keyword arguments are not supported by this model: ['token_type_ids'].". Do you happen to have any update on the issue?

@mithila2806

mithila2806 commented Dec 27, 2020

hey!

I didn't progress after that. Will update here if I get lucky.

Thanks for your update 😊

Regards,
Mithila K

@nearchoskatsanikakis

nearchoskatsanikakis commented Dec 28, 2020

Hi again @mithila2806, the way I managed to avoid this error was by finding the Python file where the error occurred and commenting out some lines. If I remember correctly, the lines were 348, 349, 350, and 351. After doing that I ran the model again and it worked. If you want me to explain anything else, just hit me up!

@mithila2806

mithila2806 commented Dec 31, 2020

hey @nearchoskatsanikakis

I was able to run the code successfully. I didn't make any changes. I simply ran the code and it worked :)
I believe there were some compatibility issues, which have since been fixed.

Happy coding 👍

Regards,
Mithila K

@hamletbatista
Author

hamletbatista commented Dec 31, 2020

Great job @mithila2806 @nearchoskatsanikakis 👏🏽👏🏽👏🏽

I'm glad you got it to work!

@mithila2806

mithila2806 commented Dec 31, 2020

Hi Hamlet

It wouldn't have been possible without you.
Can't thank you enough.
Regards,
Mithila

@thebimbolawal

thebimbolawal commented Dec 31, 2020

Hi Hamlet,

I'm getting issues when I get to:

import pandas as pd

and

df = pd.read_csv("data.csv")

df.head()

1) Is "data.csv" where I'm supposed to import the data I want to optimize?

2) How do I upload my data into it?

3) After it has optimized the titles, how do I download them into a spreadsheet for analysis?

@pgrandinetti

pgrandinetti commented Feb 7, 2021

@thebimbolawal
When you download the file using the !wget instructions provided by Hamlet, the resulting CSV will be named hootsuite_titles.csv, and it will be in the content folder if you are using Google Colab. See the screenshot.
So, to load it into pandas you can do pd.read_csv('hootsuite_titles.csv').
I hope that answers your question.
Screenshot from 2021-02-07 17-39-44
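On the last part of @thebimbolawal's question (getting the optimized titles into a spreadsheet), which wasn't addressed above: one common pattern is to write the results out with pandas and, on Colab, trigger a browser download. The DataFrame contents and filename below are hypothetical stand-ins for Ludwig's actual predictions output:

```python
import pandas as pd

# Hypothetical stand-in for Ludwig's predictions; in practice you would
# read the predictions CSV that `ludwig predict` writes out instead.
preds = pd.DataFrame({
    "Original_Title": ["how to schedule tweets"],
    "Optimized_Title_predictions": ["How to Schedule Tweets Like a Pro"],
})

# Save to a CSV that opens directly in Excel / Google Sheets.
preds.to_csv("optimized_titles.csv", index=False)

# On Google Colab, uncommenting these lines downloads the file locally:
# from google.colab import files
# files.download("optimized_titles.csv")
```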

@asktienlin

asktienlin commented Apr 16, 2021

Hello all,

Thank you all for the information.
I have followed all steps and tried to make it work.

When I run the following code,
!ludwig predict --dataset hootsuite_titles_to_optimize.csv --model_path results/experiment_run/model/

It returns this error
FileNotFoundError: [Errno 2] No such file or directory: 'results/experiment_run/model/model_hyperparameters.json'

Could you all tell me what model path I should use to make it work?
Thank you a lot
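One possible cause (an assumption, since I can't see the Colab session): when `ludwig experiment` is run more than once, Ludwig gives later runs numbered output folders (experiment_run_0, experiment_run_1, ...), so a hard-coded results/experiment_run/model/ path may point at a folder that doesn't exist. Listing the results directory shows which model paths are actually there:

```python
import os

# Print every candidate --model_path under results/ so the right one
# can be passed to `ludwig predict`. Repeated runs get numbered
# suffixes, which is a common reason the unsuffixed path is missing.
if os.path.isdir("results"):
    for run in sorted(os.listdir("results")):
        print(os.path.join("results", run, "model"))
else:
    print("no results/ directory found - has training been run yet?")
```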

@Suvakanta8

Suvakanta8 commented Sep 8, 2021

Hi, while trying to replace the hootsuite_titles.csv dataset with my own website's dataset, I am getting a ValueError like this:
ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(128, 32, 512), (128, 224, 512)]
Please reply.
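As an aside on what this ValueError is saying (a NumPy sketch of the shapes, not a Ludwig fix): the two encoder outputs have different sequence lengths (32 vs 224), and concatenation only works when every axis except the concat axis matches.

```python
import numpy as np

# Dummy tensors with the shapes from the error: (batch, seq_len, hidden)
a = np.zeros((128, 32, 512))   # first input feature's encodings
b = np.zeros((128, 224, 512))  # second input feature's encodings

# Concatenating on the hidden axis fails: seq_len 32 != 224.
try:
    np.concatenate([a, b], axis=-1)
except ValueError as err:
    print("mismatch:", err)

# Concatenating along the sequence axis works fine.
print(np.concatenate([a, b], axis=1).shape)  # (128, 256, 512)
```

In Ludwig terms this suggests the two text inputs ended up with different sequence lengths on the new dataset; making both inputs pad to the same maximum length (or reducing the encoder outputs to fixed-size vectors instead of `reduce_output: null`) is the direction I would look, though I haven't verified that against this exact dataset.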

@marcusbianco

marcusbianco commented Sep 8, 2021

@Suvakanta8

Suvakanta8 commented Sep 9, 2021

Thanks @marcusbianco. Can you please explain how to prepare a dataset properly for training? Also, what is the optimized score column in the Hootsuite dataset? Is it necessary for training?
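Not a full answer, but based on the config at the top of this gist, the training CSV needs at least the three columns it names: Original_Title, Keyword, and Optimized_Title. The rows below are invented placeholders just to show the expected shape:

```python
import pandas as pd

# Column names come from the YAML config above; the contents here are
# made-up examples, not real Hootsuite data.
rows = [
    {"Original_Title": "how to schedule tweets",
     "Keyword": "schedule tweets",
     "Optimized_Title": "How to Schedule Tweets: A Step-by-Step Guide"},
    {"Original_Title": "instagram bio ideas",
     "Keyword": "instagram bio",
     "Optimized_Title": "25 Instagram Bio Ideas You Can Copy"},
]
pd.DataFrame(rows).to_csv("my_training_data.csv", index=False)

# Verify the file round-trips with the columns the config expects.
print(pd.read_csv("my_training_data.csv").columns.tolist())
# ['Original_Title', 'Keyword', 'Optimized_Title']
```

On the score column: it isn't referenced anywhere in the config, and Ludwig only reads the columns named as features, so as far as I can tell it shouldn't be required for training.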
