There are several changes required in order for MetaMaps to be accepted into the MetaGame codebase. While the most important tasks involve QA and formatting, this is also a good time to augment the existing implementation with changes we are planning for the future.
The `Maps` table needs to be regenerated. It would be a good idea to reset all diffs in Hasura and update the table to use `Player.id` instead of an Ethereum address. The address that is pulled from 3box should be used to find the player's id on MetaGame.

- Reset all diffs in Hasura.
- Have maps associated with `Player.id` instead of an Ethereum address.
- Map data should be `json` instead of `text`.
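A minimal sketch of what the regenerated row shape and the address-to-id lookup could look like. All names here (`MapRow`, `resolvePlayerId`, the `players` lookup) are hypothetical illustrations, not the existing schema or API:

```typescript
// Hypothetical shape of a regenerated Maps row: keyed by Player.id,
// with map data stored as parsed JSON instead of a raw text column.
interface MapRow {
  id: string;
  playerId: string; // Player.id, not an Ethereum address
  data: Record<string, unknown>; // json column, previously text
}

// Sketch of resolving the 3box-supplied address to a MetaGame Player.id.
// `players` stands in for whatever lookup the MetaGame API actually provides.
function resolvePlayerId(
  address: string,
  players: Array<{ id: string; ethereumAddress: string }>
): string | undefined {
  // Ethereum addresses are case-insensitive, so normalize before comparing.
  const needle = address.toLowerCase();
  return players.find((p) => p.ethereumAddress.toLowerCase() === needle)?.id;
}
```

The lowercase normalization matters because checksummed and non-checksummed forms of the same address would otherwise fail to match.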
The current implementation of Redux needs to be updated to Redux Toolkit: https://redux-toolkit.js.org/. There are also general refactors required, such as more efficient mappings for component renders; this is particularly important for larger map datasets.

- Refactor to Redux Toolkit.
- Improve component rendering, e.g. do not keep components inside Redux stores.
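The second bullet can be sketched as pure, serializable state plus reducer logic. Redux Toolkit's `createSlice` would wrap case reducers like these (letting them mutate an Immer draft instead of copying); the sketch below skips the library dependency, and all names (`MapState`, `Shape`, `addShape`) are hypothetical:

```typescript
// Hypothetical serializable map state: plain shape data only, never React
// components. Components are derived from this data at render time.
interface Shape { id: string; kind: "rect" | "ellipse"; x: number; y: number }
interface MapState { shapes: Shape[]; selectedId: string | null }

const initialState: MapState = { shapes: [], selectedId: null };

// Pure reducers in classic immutable style; inside createSlice these
// bodies could mutate the draft state directly via Immer.
function addShape(state: MapState, shape: Shape): MapState {
  return { ...state, shapes: [...state.shapes, shape] };
}

function selectShape(state: MapState, id: string | null): MapState {
  return { ...state, selectedId: id };
}
```

Keeping only plain data in the store is also what makes memoized selectors and efficient re-render mappings possible for large map datasets.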
We should be utilizing the `@metafam/ds` component library. We may also want to implement new context-specific components, for example the shapes and context menus used in MetaMaps.

- Update and migrate components to `@metafam/ds`.
- Copy theme elements from other MetaGame designs. Figma can be used as a reference: https://www.figma.com/file/RWVfMGvXDAX74fQexze8uO/Meta-Game-Copy-Copy?node-id=58%3A2
- Improve the overall code structure to `jsx`/`tsx`.
As per comments from @heterotic, we should have a z component that lets one toggle the z index of the map. This would be a `-`/`+` button with a number input at the top of the map.

- Create a `z` layer for maps.
- Layers can be toggled with a `-`/`+` button at the top of the map, alongside a number input.
- Map renders are based on the `z` layer.
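The layer behavior above can be sketched in a few lines. The names (`MapElement`, `stepLayer`, `visibleElements`) and the clamping bounds are assumptions for illustration:

```typescript
// Hypothetical map element carrying a z index.
interface MapElement { id: string; z: number }

// Step the active layer with the -/+ buttons, clamped to the
// layers that actually exist so the input cannot go out of range.
function stepLayer(current: number, delta: 1 | -1, minZ: number, maxZ: number): number {
  return Math.min(maxZ, Math.max(minZ, current + delta));
}

// Map renders are based on the z layer: only elements on the
// active layer are included in the render set.
function visibleElements(elements: MapElement[], activeZ: number): MapElement[] {
  return elements.filter((e) => e.z === activeZ);
}
```

The number input would set the active z directly, while the `-`/`+` buttons call `stepLayer`; both feed the same value into `visibleElements` before rendering.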
The 2nd class in the map parsing engine starts with a function that takes the schema stored in the JSON string for a particular data source and uses that schema to construct a query, as indicated by the data source type, to extract the information needed for instancing the view design tools of the 3rd class.

In my experience, table definitions are often sparse and incomplete: even primary keys can be neglected in database design, and foreign keys are often unidentifiable without testing values across tables. Sometimes column names give some indication of a foreign key, but even that cannot be trusted in any concrete sense. It is therefore better to collect the following items, which inform the user of the likelihood that a join is available (and also provide a method of determining normalization constraints for bad table design). For each 'table/(array)' in a data source, we need at least:

- a tableName,
- the number of potential fields/columns, and
- the number of rows/keyValueSets.

In addition, for each 'field/(column)' in each 'table/(array)' of a data source, we need at a minimum:

- a count of not-null values (or nulls, whichever is easier based on the source),
- a count of unique values, and
- any field-specific schema data available from the data source schema, such as data type.

All of this information should be loaded into a final JSON string and sent to the 3rd class. Storing it in a Map table in Hasura for each instance of a view would also be a good idea, as these queries are not likely to run in a user-friendly amount of time and could take several minutes (or even longer) of real compute time to resolve.
This class should be re-run for a map view whenever the user updates the view with a new data source, as a refresh option selectable by the user, and/or by default after restoring a saved view, prior to any re-rendering of the map, to check for renderer-breaking changes to the existing schema.
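The per-table and per-field statistics described above can be sketched as a small profiling pass. Treating rows as generic key/value sets keeps the same code applicable to SQL rows and JSON arrays alike; all names (`TableProfile`, `profileTable`) are hypothetical, and a real implementation would push these counts down into source-side queries rather than loading every row:

```typescript
// Hypothetical per-field profile: not-null and unique counts inform the
// user of the likelihood that a join on this field would succeed.
interface FieldProfile { name: string; notNullCount: number; uniqueCount: number }
interface TableProfile {
  tableName: string;
  fieldCount: number;
  rowCount: number;
  fields: FieldProfile[];
}

// Profile one 'table/(array)' of a data source.
function profileTable(
  tableName: string,
  rows: Array<Record<string, unknown>>
): TableProfile {
  // Union of keys across rows, since sparse sources may omit fields per row.
  const names = new Set<string>();
  rows.forEach((r) => Object.keys(r).forEach((k) => names.add(k)));

  const fields = Array.from(names).map((name) => {
    const values = rows
      .map((r) => r[name])
      .filter((v) => v !== null && v !== undefined);
    return { name, notNullCount: values.length, uniqueCount: new Set(values).size };
  });

  return { tableName, fieldCount: names.size, rowCount: rows.length, fields };
}

// The final JSON string handed to the 3rd class (and cached, e.g. in a
// Hasura table, since recomputing it can take minutes or longer).
function profileToJson(profile: TableProfile): string {
  return JSON.stringify(profile);
}
```

A field whose `uniqueCount` equals the table's `rowCount` with no nulls behaves like a key and is a strong join candidate, which is exactly the signal the table definition alone often fails to provide.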