@varocarbas
Last active May 13, 2023 07:58
Database management (II): main implementations -- accessory_java


Introduction

This article discusses the most important parts of the DB-related code of accessory_java (main repository). The code of another Java library based on similar ideas, ib (main repository), is also used as a supporting resource.

accessory_java

I started developing accessory_java while working on a personal project. It was meant to be a container of generic resources which I could eventually use in other applications: a library including not just useful methods and variables, but also a solid starting point for facing virtually any project in Java. Or, in other words, a way to adapt Java to my programming approach, to help me easily overcome the language's peculiarities. That initial code has kept growing and evolving until reaching the current stage, which I consider mature, comprehensive and reliable enough.

Main features of accessory_java:

  • Built from scratch, relying on stable functionalities and with the minimum possible amount of external dependencies. At the moment, there is only one external dependency, the MySQL connector, and even it can be ignored when not using the database (DB) resources.
  • Including virtually no comments or documentation, but with an overall clear structure and quite rigid rules and conventions which are applied in a very systematic and consistent way. This article and potential future ones might gradually compensate for that lack.
  • Main priorities (ordered): friendliness, safety (e.g., default validity checks everywhere or no uncaught exceptions), efficiency/speed, scalability/adaptability.
  • Theoretically technology agnostic, although currently focused mostly on MySQL and Linux.

Before using any of the resources of this library, one of the start overloads in the accessory._ini class has to be called, as shown in the code below.

String app_name = samples.APP;
boolean includes_legacy = false; //When true, it looks for a package called "legacy" which might 
                                 //be missing anyway.

//------
//dbs_setup includes the setup to be used by default in all the DBs. 
//It can be populated by using one of the parent_ini_db.get_setup_vals methods or via adding valid
//types.CONFIG_DB_SETUP_[*]-value pairs. Any of those values can be overwritten by the specific setups 
//in the _ini_db class, which are included when adding a new DB through the populate_db method called 
//from populate_all_dbs.

String db_name = samples.PERFECT_NAME;
String username = samples.PERFECT_USERNAME;
String password = samples.PERFECT_PASSWORD;
        
//--- These two variables work together: if encrypt is true, user has to be non-null and the DB 
//credentials will be retrieved from the corresponding encrypted file.
boolean encrypt = true;
String user = samples.USER; //This is the crypto ID associated with the given encryption information, 
                            //which can be generated by calling the method db.encrypt_credentials.
//---
        
//If encrypt is true, a set of encrypted credentials located in a specific file is expected.
//These credentials can be generated by calling db.encrypt_credentials/db.update_credentials. 
//But that call or other related actions (e.g., changing the directory for credentials) can't 
//be performed before _ini.start is called. That is, if delay_encryption is set to false, 
//dbs_setup would be updated with previously-generated credentials. Alternatively, delay_encryption 
//can be true, dbs_setup would include a meaningless placeholder and the credentials would be updated 
//after _ini.start is called. 
boolean delay_encryption = true;
        
HashMap<String, Object> dbs_setup = null;
        
if (samples.USE_DB)
{
    String setup = null; //This variable can be null (i.e., default setup) or equal to the given DB
                         //(i.e., corresponding types.CONFIG_[*] constant). 
    String host = null; //A null value indicates that the default host will be used 
                        //(e.g., "localhost" in MySQL).
            
    if (encrypt && !delay_encryption) 
    {
        dbs_setup = parent_ini_db.get_setup_vals(db_name, setup, user, host, encrypt);
    }
    else dbs_setup = parent_ini_db.get_setup_vals(db_name, setup, username, password, host);
}
//------

_ini.start(app_name, includes_legacy, dbs_setup);
        
if (encrypt && delay_encryption)
{
    paths.update_dir(paths.DIR_CRYPTO, samples.PERFECT_LOCATION);
    paths.update_dir(paths.DIR_CREDENTIALS, samples.PERFECT_LOCATION);

    db.update_credentials(db_crypto.SOURCE, user, username, password);
}

public abstract class samples
{
    public static final String APP = "whatevs";
    public static final String ID = "whichevs";
    public static final String USER = "whoevs";
    public static final String PLACEHOLDER = "wherevs";
    
    //Set this variable to true only if there is a valid DB setup in place 
    //(i.e., MySQL connector, valid DB name and valid credentials).
    public static final boolean USE_DB = false;
    
    //--- I would input my own values where these constants are used if I were you.
    public static final String PERFECT_NAME = PLACEHOLDER;
    public static final String PERFECT_LOCATION = PLACEHOLDER;
    public static final String PERFECT_USERNAME = PLACEHOLDER;
    public static final String PERFECT_PASSWORD = PLACEHOLDER;
    //---

    public static void print_error(String sample_id_) { print_message(sample_id_, null, false); }
    
    public static void print_message(String sample_id_, String message_, boolean is_ok_)
    {
        String message = sample_id_ + misc.SEPARATOR_CONTENT;
        
        if (is_ok_) message += message_; 
        else message += "ERROR";
        
        generic.to_screen(message);
    }
    
    public static void print_messages(String sample_id_, String[] messages_)    
    {
        String message = sample_id_;

        for (String message2: messages_) { message += misc.SEPARATOR_CONTENT + message2; }
        
        generic.to_screen(message);
    }
    
    public static void print_end() { generic.to_screen("sample code completed successfully"); }
}

ib

A Java library easing the automated communication with Interactive Brokers (IB). It relies on and emulates the defining ideas of accessory_java. An analysis of this library is beyond the scope of the present article; its code will merely be used to support the explanations about accessory_java.

Preliminary ideas

This article is a continuation of the previous one, which described the general approach to DB management, and, more broadly, of all the other articles about accessory_java, which together provide a rather clear picture of these libraries and their (conceptual) peculiarities. It wouldn't make much sense to keep clarifying what readers should already know. In any case, there is one issue, discussed precisely in the last article, which does deserve some attention: types, keys and values within the DB context.

Due to the constant/modifiable essence of types/keys in these libraries, I intuitively associate them with a twofold reality equivalent to the key-value concept: the unmodifiable side, the type (the key in key-value terms), is mostly used to identify the given value (or key here). Although this applies to all the accessory_java keys, their low variability makes them unsuitable for the DB case, whose most basic requirements demand more flexibility (i.e., any possible column or table name has to be supported). This is where accessory.config and its global collection _info come into the picture. There could be doubts about the adequacy of this specific option, and a different alternative could easily be chosen, but some storage along those lines will always be needed. That is, the default support for types/keys will always need to be complemented, either by taking advantage of the CONFIG-types peculiarities or by relying on some other approach. This should be evident to anyone thinking about the situation for a brief moment or taking a quick look at the actual implementation. There are thus types (e.g., sources/fields), keys (not of much use in this context) and values (e.g., tables/cols).
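
To make the constant/variable split concrete, the underlying idea can be reduced to a plain key-value store in which an immutable type string identifies a changeable value such as a table or column name. This is only a minimal sketch with hypothetical names; accessory.config and its _info collection are organised differently:

```java
import java.util.HashMap;

//Minimal sketch of the type-vs-value idea: a fixed type string
//(playing the role of a source or field constant) identifies a
//modifiable value (the actual table or column name). All names
//here are hypothetical, not the real accessory.config API.
class type_value_store
{
    private static final HashMap<String, String> _info = new HashMap<String, String>();

    public static void put(String type_, String value_) { _info.put(type_, value_); }

    public static String get(String type_) { return _info.get(type_); }
}
```

Renaming a column would only change the stored value; the type constant used throughout the code stays the same, which is precisely what makes the constant side suitable for identification.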

In summary, when referring to fields/cols or sources/tables, expressions like "type vs. key" should be understood in a wider sense, as constant/ID/key vs. variable/value, beyond the strict meaning of type vs. key in the accessory_java sense. Purely speaking, types/keys here are really referring to types/values (or keys/values in the key-value sense). But who cares, right? Unless... Are you a purist?! I mean... I don't like talking bad about groups of random people, but we all have heard stories. I am definitely not a puristphobe. Some of my best friends are purists and they know that I despise them publicly just to fit in. Additionally, this is not about the hate groups I have joined or the personality traits which make me puke. Everyone knows that purists are way too much into orthodoxy and this is a red line for me. Trying to unorthodox the living purism out of an orthodox was the biggest mistake of my life. It was totally worth it though.

Default mode

This refers to the first version of the DB code, where multiple checks and conversions were performed and non-static classes were extensively used. It is mostly stored in accessory.db, in intermediate classes like accessory.db_queries and in DB-type-specific ones (e.g., accessory.db_mysql).

private static ArrayList<HashMap<String, String>> adapt_string_outputs
(
    String source_, ArrayList<HashMap<String, String>> outputs_
)
{
    ArrayList<HashMap<String, String>> outputs = new ArrayList<HashMap<String, String>>();

    String source = db.check_source(source_);

    HashMap<String, db_field> fields = db.get_source_fields(source);
    if (!arrays.is_ok(fields) || !arrays.is_ok(outputs_)) return outputs;

    HashMap<String, String> col_fields = new HashMap<String, String>();
    boolean first_time = true;

    for (HashMap<String, String> item: outputs_)
    {
        HashMap<String, String> output = new HashMap<String, String>();

        for (Entry<String, String> item2: item.entrySet())
        {
            String col = item2.getKey();
            String field = null;

            if (first_time) 
            {
                field = col_to_field(source, col, fields);
                if (!strings.is_ok(field)) field = col;

                col_fields.put(col, field);
            }
            else field = col_fields.get(col);

            output.put(field, item2.getValue());
        }
            
        first_time = false;
            
        outputs.add(output);
    }

    return outputs;
}

Even though I haven't used most of these resources much for months, since the quick mode(s) came into the picture, some parts where performance isn't an issue remain relevant. accessory.db.create_table and similar methods, for instance, have no quick equivalent. Additionally, all the modes share the same config-related code, and adding further friendly shortcuts in the quick classes would be pointless. Hence, methods like accessory.db.get_col are still used everywhere, although there are similar quick options which will be discussed in the next section.

public static void create_table
(
    String source_, HashMap<String, db_field> fields_, boolean drop_it_
)
{
    String source = db.check_source_error(source_);
    if (!strings.is_ok(source)) return;

    boolean exists = table_exists_internal(source);
    if (exists)
    {
        if (drop_it_) drop_table_internal(source);
        else return;		
    }

    create_table_internal(source, get_cols(source_, fields_));
}
    
public static void __create_table_like
(
    String table_name_, String source_like_, boolean drop_it_
) 
{ 
    __lock();
        
    String source = db.check_source_error(source_like_);
        
    if (!strings.is_ok(source)) 
    {
        __unlock();
            
        return;
    }
        
    if (!strings.is_ok(table_name_))
    {
        db.manage_error(source, "Wrong table name");
            
        __unlock();
            
        return;
    }
        
    String cur_source = db._cur_source;
    db._cur_source = source;
        
    parent_db instance = db.get_valid_instance(source);
        
    create_table_like_internal(table_name_, source, instance, drop_it_); 
        
    db._cur_source = cur_source;
        
    __unlock();
}
    
public static void __backup_table(String source_, String backup_name_) 
{ 
    __lock();
        
    String source = db.check_source_error(source_);
        
    if (!strings.is_ok(source)) 
    {
        __unlock();
            
        return;
    }
        
    String table_source = db.get_table(source);
    String table_backup = 
    (
        strings.is_ok(backup_name_) ? backup_name_ : (table_source + "_backup")
    );
        
    String cur_source = db._cur_source;
    db._cur_source = source;
        
    parent_db instance = db.get_valid_instance(source);
        
    create_table_like_internal(table_backup, source, instance, true); 

    instance.backup_table(table_source, table_backup);
        
    db._cur_source = cur_source;
        
    __unlock();
}

public static void drop_table(String source_) { drop_table_internal(source_); }

public static void truncate_table(String source_) { truncate_table_internal(source_); }

public static String get_col(String source_, String field_) 
{ 
    return (String)config.get(get_db(source_), field_); 
}

accessory.db will remain as the main class for DB-related actions and, as such, will continue containing all the main methods or those dealing with essential aspects. accessory.db.sanitise_string or accessory.db.encrypt_credentials are good examples of this. In some cases, both versions exist: the original one in accessory.db and a quick equivalent accounting for its peculiarities (e.g., update_conn_info in accessory.db/accessory.db_quicker).

public static boolean encrypt_credentials
(
    String source_, String user_, String username_, String password_
)  
{ 
    String id = get_encryption_id(source_);
    if (!strings.is_ok(id)) return false;

    boolean stored_in_files = crypto.is_stored_in_files();
    if (!stored_in_files) crypto.store_in_files();
        
    boolean output = credentials.encrypt_username_password_file
    (
        id, user_, username_, password_
    );  

    if (!stored_in_files) crypto.store_in_db();
        
    return output;
}

Regardless of my current expectations, all the accessory.db methods are still working as usual and some people might even prefer to use them. Furthermore, ib still supports this first DB mode in parts not in use at the moment but certainly usable (e.g., ib.common_wrapper_old or ib.async_data_market). Not to mention that most of the DB-related methods in accessory_java and ib still rely, directly or indirectly, on an is_quick parameter.

abstract class common_wrapper_old 
{
    public static void tick_price(int id_, int field_ib_, double price_) 
    { 
        if (async_data_market.is_ok()) async_data_market.tick_price(id_, field_ib_, price_);

        if (async_data_trades.is_ok()) async_data_trades.tick_price(id_, field_ib_, price_);
            
        if (async_data_watchlist.is_ok()) async_data_watchlist.tick_price(id_, field_ib_, price_);
    }
    
    public static void tick_size(int id_, int field_ib_, int size_) 
    { 
        if (async_data_market.is_ok()) async_data_market.tick_size(id_, field_ib_, size_);

        if (async_data_trades.is_ok()) async_data_trades.tick_size(id_, field_ib_, size_);

        if (async_data_watchlist.is_ok()) async_data_watchlist.tick_size(id_, field_ib_, size_);
    }
    
    public static void _tick_generic(int id_, int field_ib_, double value_) 
    {
        if (async_data_market.is_ok()) async_data_market.tick_generic(id_, field_ib_, value_);

        if (async_data_trades.is_ok()) async_data_trades.tick_generic(id_, field_ib_, value_);

        if (async_data_watchlist.is_ok()) async_data_watchlist.tick_generic(id_, field_ib_, value_);
    }
    
    public static void _tick_snapshot_end(int id_)
    {
        if (async_data_market.is_ok()) async_data_market.tick_snapshot_end(id_);
            
        if (async_data_trades.is_ok()) async_data_trades.tick_snapshot_end(id_);
            
        if (async_data_watchlist.is_ok()) async_data_watchlist.tick_snapshot_end(id_);
    }
}

public static long get_long(String source_, String field_col_, String where_, long wrong_) 
{ 
    return get_long(source_, field_col_, where_, wrong_, DEFAULT_IS_FIELD, is_quick(source_)); 
}
    
public static long get_long
(
    String source_, String field_col_, String where_, 
    long wrong_, boolean is_field_, boolean is_quick_
) 
{ 
    long output = 
    (
        is_quick_ ? db_quick.select_one_long
        (
            source_, get_field_quick_col
            (
                source_, field_col_, is_field_, true
            ), 
            where_, db.DEFAULT_ORDER
        ) 
        : db.select_one_long(source_, field_col_, where_, db.DEFAULT_ORDER)); 
    
    return (output != db.WRONG_LONG ? output : wrong_);
}
    
public static boolean get_boolean(String source_, String field_col_, String where_) 
{ 
    return get_boolean(source_, field_col_, where_, DEFAULT_IS_FIELD, is_quick(source_)); 
}
    
public static boolean get_boolean
(
    String source_, String field_col_, String where_, boolean is_field_, boolean is_quick_
) 
{ 
    return 
    (
        is_quick_ ? db_quick.select_one_boolean
        (
            source_, get_field_quick_col
            (
                source_, field_col_, is_field_, true
            ), 
            where_, db.DEFAULT_ORDER
        ) 
        : db.select_one_boolean(source_, field_col_, where_, db.DEFAULT_ORDER)
    ); 
}

public static ArrayList<Double> get_all_decimals
(
    String source_, String field_col_, String where_
) 
{ 
    return get_all_decimals(source_, field_col_, where_, DEFAULT_IS_FIELD, is_quick(source_)); 
}

public static ArrayList<Double> get_all_decimals
(
    String source_, String field_col_, String where_, boolean is_field_, boolean is_quick_
) 
{ 
    return 
    (
        is_quick_ ? db_quick.select_some_decimals
        (
            source_, db_common.get_field_quick_col
            (
                source_, field_col_, is_field_, true
            ), 
            where_, db.DEFAULT_MAX_ROWS, db.DEFAULT_ORDER
        ) 
        : db.select_some_decimals
        (
            source_, field_col_, where_, db.DEFAULT_MAX_ROWS, db.DEFAULT_ORDER
        )
    ); 
}

Quick mode

Initially, the "quick" label was exclusively associated with the second mode: an improvement over the original querying methods which could be enabled through the is_quick parameter. For some time now, the quick mode has referred to both the quick/second and the quicker/third modes. The latter is assumed to be an evolved version which is used whenever possible; in practice, that is always, because its sole requirement, a static implementation for the given DB type to be called from accessory.db_quicker, is met for MySQL, the only supported type (i.e., accessory.db_quicker_mysql).

public static HashMap<String, String> select_one
(
    String type_, String source_, String[] cols_, 
    String where_cols_, String order_cols_
) 
{ 
    initialise();
        
    HashMap<String, String> output = new HashMap<String, String>();
        
    try
    {	
        ArrayList<HashMap<String, String>> temp = select
        (
            type_, source_, cols_, where_cols_, 1, order_cols_
        );	
        
        if (arrays.is_ok(temp)) output = temp.get(0);
    }
    catch (Exception e) 
    {
        output = new HashMap<String, String>();
            
        manage_error
        (
            type_, strings.to_string(source_), e, null, null, cols_,
            strings.to_string(where_cols_), strings.to_string(order_cols_), 
            db.WRONG_INT, null
        );
    }
        
    return output; 
}

public static ArrayList<String> select_some_strings
(
    String type_, String source_, String col_, 
    String where_cols_, int max_rows_, String order_cols_
) 
{ 
    initialise();
        
    ArrayList<String> output = new ArrayList<String>();
        
    try 
    { 
        ArrayList<HashMap<String, String>> items = select
        (
            type_, source_, new String[] { col_ }, 
            where_cols_, max_rows_, order_cols_
        ); 
        
        if (arrays.is_ok(items))
        {	
            for (HashMap<String, String> item: items) 
            { 
                output.add((String)get_val(col_, item, data.STRING)); 
            }
        }
    }
    catch (Exception e) 
    {
        output = new ArrayList<String>();
            
        manage_error
        (
            type_, strings.to_string(source_), e, strings.to_string(col_), 
            null, null, strings.to_string(where_cols_), 
            strings.to_string(order_cols_), max_rows_, null
        );
    }
        
    return output; 
}

By default, ib is completely focused on the quick mode. And, except for a few cases where the given implementation is intrinsically linked to it (e.g., ib.async_data_quicker), this reliance is enabled via a modifiable flag which is defined somewhat indirectly. All the DB classes are located inside the db_ib package, including the secondary ones such as db_ib.common, used instrumentally by the other classes. The main DB classes, the ones containing most of the code used to interact with a specific source, have counterparts in the ib package; these include the public, friendlier interfaces and take care of all the actions not strictly related to DB interactions. For example, db_ib.apps, together with the secondary class db_ib.apps_cache, and ib.apps. accessory.db_common.is_quick allows switching back to the first mode, meaning that all the calls to querying methods involving the given source will stop relying on the quick alternative from that moment on; for db_ib.apps.SOURCE, this can easily be done by calling ib.apps.is_quick. Despite the apparent absence of is_quick parameters, enabling/disabling the quick mode for most of the methods in the db_ib package isn't difficult. This setup is also another good example of the usefulness of the overall support for sources in the DB classes.

public static boolean is_running() { return db_ib.apps.is_running(null); }
    
public static boolean is_stopped() { return !is_running(); }
    
public static boolean is_connected() { return db_ib.apps.is_connected(null); }
    
public static boolean update_is_connected(boolean is_connected_) 
{ 
    return db_ib.apps.update_is_connected(null, is_connected_); 
}
    
public static void update_time() { db_ib.apps.update_time(); }

public static boolean is_running(String app_) 
{ 
    return is_common(ib.apps.STATUS_RUNNING, app_); 
}
    
public static boolean is_stopped(String app_) 
{ 
    return is_common(ib.apps.STATUS_STOPPED, app_); 
}
    
public static boolean is_connected(String app_) 
{ 
    return db_common.exists
    (
        SOURCE, db_common.join_wheres
        (
            get_where_app(app_), get_where_connected(true)
        )
    ); 
}

public static void is_quick(String source_, boolean is_quick_) 
{ 
    if (!strings.is_ok(source_)) return;
        
    if (is_quick_)
    {
        if (!IS_QUICK.contains(source_)) IS_QUICK.add(source_);
    }
    else IS_QUICK.remove(source_);
}
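
Based on the is_quick method above, enabling/disabling the quick mode boils down to the given source's membership in the IS_QUICK collection. A self-contained re-creation of that logic, using a HashSet purely for illustration (the library's actual collection and validity checks may differ):

```java
import java.util.HashSet;

//Self-contained re-creation of the per-source quick-mode flag shown
//above. Hypothetical sketch: the real implementation relies on the
//library's own collections and strings.is_ok validity check.
class quick_flags
{
    private static final HashSet<String> IS_QUICK = new HashSet<String>();

    public static void is_quick(String source_, boolean is_quick_)
    {
        if (source_ == null || source_.isEmpty()) return;

        if (is_quick_) IS_QUICK.add(source_);
        else IS_QUICK.remove(source_);
    }

    public static boolean is_quick(String source_) { return IS_QUICK.contains(source_); }
}
```
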

One of the main characteristics of the quick mode(s) is the reduction of the number of conversions, mainly when dealing with fields/cols (i.e., exclusively using cols when interacting with these methods). Performing such an implementation, so simple in appearance, has faced numerous challenges for different reasons. Namely: having to keep the overall coherence and support for the field/col reality (e.g., cols being changeable realisations of fields); supporting existing code exclusively relying on fields, on cols or on both; and truly maximising the performance gains by ensuring that the required conversions happen in the least impactful way. Addressing those problems has actually been the main goal of a big proportion of the multiple modifications which I have been performing to ease the management of the DB modes. The current versions of db_common, db_quicker or db_quick in the accessory package are good proofs of this. More specifically, the cols are stored in accessory.db_quick._cols at startup, which implies that methods like accessory.db_quick.get_col can potentially return invalid values (i.e., different from the ones delivered by accessory.db.get_col, which directly accesses the config.get ground truth). This is the reason why the quick classes have their own methods to perform certain actions like change_table_name_queries or change_col_name_queries: their local versions need to be synchronised. Until recently, I hadn't applied this preloading approach to tables; it only works for the quicker mode, although the second mode is virtually unused at the moment anyway. Curiously, this improvement has proven to be more beneficial than what I was intuitively anticipating.

public static boolean change_table_name_queries(String source_, String name_) 
{ 
    return db_static.change_table_name_queries(source_, name_); 
}
    
public static boolean change_col_name_queries(String source_, String field_, String name_) 
{ 
    return db_static.change_col_name_queries(source_, field_, name_); 
}
    
public static String get_table(String source_) 
{ 
    int i = db.get_source_i(source_);
        
    return (i > arrays.WRONG_I ? _tables[i] : db.get_table(source_)); 
}
    
public static void update_table(String source_, String table_) 
{ 
    populate_table(source_, table_, arrays.WRONG_I); 
}

public static boolean change_table_name_queries(String source_, String name_)
{	
    boolean output = db.change_table_name_queries(source_, name_); 

    if (output && is_quicker(source_)) db_quicker.update_table(source_, name_);
        
    return output;
}
    
public static boolean change_col_name_queries(String source_, String field_, String name_)
{	
    boolean output = db.change_col_name_queries(source_, field_, name_); 

    if (output) db_quick.populate_col(source_, field_);
        
    return output;
}
    
public static String get_table(String source_) 
{ 
    return 
    (
        is_quicker(source_) ? 
        db_quicker.get_table(source_) : 
        db.get_table(source_)
    ); 
}
    
public static String get_col(String source_, String field_) 
{ 
    int i = db.get_field_i(source_, field_);
        
    return (i > arrays.WRONG_I ? _cols[i] : db.get_col(source_, field_)); 
}

public static String[] get_cols(String source_) 
{ 
    String[] output = null;
        
    int[] min_max = db.get_source_min_max(source_);
    if (min_max == null) return output;
        
    output = new String[db.get_source_fields_tot(min_max)];

    int i2 = -1;

    for (int i = min_max[0]; i <= min_max[1]; i++) 
    {
        i2++;
            
        output[i2] = _cols[i];
    }
        
    return output;
}

public static String[] get_cols(String source_, String[] fields_) 
{ 
    int tot = arrays.get_size(fields_);
    if (tot == 0) return null;
                
    String[] cols = new String[tot];
        
    for (int i = 0; i < tot; i++) 
    { 
        String col = get_col(source_, fields_[i]);
        if (!strings.is_ok(col)) continue;
            
        cols[i] = col;
    }
        
    return cols;
}
    
public static void update_cols(String source_) 
{ 
    populate_cols(source_, db.get_source_fields(source_)); 
}

Other relevant issues

There are other aspects and implementations, mostly related to the quick mode, which are worth analysing.

Global arrays

The DB approach in accessory_java has always been associated with the global storage/retrieval of non-trivial amounts of data. The exact setup of these resources and the way in which they are managed have probably had the highest impact on performance and memory usage, not just for the DB part but for these libraries in general. The most recent modification is a descriptive sample: replacing the global array storing the accessory.db_field instances for the fields in all the tables with a String array. That is, setting up an efficient storage for low-priority resources by taking advantage of the new accessory.db_field functionalities serialise/unserialise.

public String serialise() 
{ 
    HashMap<String, Object> items = new HashMap<String, Object>();
        
    items.put(SERIALISATION_LABEL_TYPE, data.check_type(_type));
    items.put(SERIALISATION_LABEL_SIZE, _size);
    items.put(SERIALISATION_LABEL_DECIMALS, _decimals);
    items.put
    (
        SERIALISATION_LABEL_DEFAULT, 
        (
            !generic.are_equal(_default, WRONG_DEFAULT) ? _default : null
        )
    );
    items.put
    (
        SERIALISATION_LABEL_FURTHER, 
        (
            _further != null ? serialisation.get_string_array(_further) : null
        )
    );
        
    return serialisation.serialise(null, items);
}

public static db_field unserialise(String serialised_)
{
    db_field output = null;
        
    HashMap<String, String> vals = serialisation.get_unserialised_item_values
    (
        serialised_, db_field.get_all_serialisation_labels()
    );

    if (arrays.is_ok(vals))
    {
        output = new db_field
        (
            vals.get(db_field.SERIALISATION_LABEL_TYPE), (long)serialisation.parse_value
            (
                vals.get(db_field.SERIALISATION_LABEL_SIZE), long.class
            ), 
            (int)serialisation.parse_value
            (
                vals.get(db_field.SERIALISATION_LABEL_DECIMALS), int.class
            ), 
            serialisation.parse_value
            (
                vals.get(db_field.SERIALISATION_LABEL_DEFAULT)
            ), 
            (String[])serialisation.parse_array
            (
                vals.get(db_field.SERIALISATION_LABEL_FURTHER), String[].class
            )
        );			
    }
    
    return output;
}

During the last months, I have been performing multiple modifications to how the main DB information is stored and retrieved, improving the corresponding global arrays by relying on more efficient collection types and by reducing their dimensions. Since I have been updating the public accessory_java repository quite often, getting a good grasp of this evolution is easy: look at previous commits and focus on the way in which the global collections in the main DB classes (e.g., accessory.db, accessory.db_common or accessory.db_quick) have been changing. And improvements in such important parts of the code are likely to continue occurring.

static String[] SETUP_IDS = new String[0];
static String[] SOURCES = new String[0];
static String[] FIELDS = new String[0];
    
static String INSTANCE = null;

static String _cur_source = strings.DEFAULT;
static String _cur_type = DEFAULT_TYPE;
static String _last_query = null;
    
private static int[] SOURCES0 = new int[0];
private static String[] FIELDS_INFO = new String[0];
private static Object[][] SETUPS = new Object[0][0];
private static boolean[] DEFAULTS = new boolean[0];
    
private static int FIELDS_TOT = -1;
    
private static String[] _credentials = null;
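
The globals above follow a parallel-array layout: the same index identifies one item across several primitive arrays, instead of one map holding objects. A minimal, self-contained sketch of this idea (the names and growth logic are illustrative, not the library's actual code):

```java
import java.util.Arrays;

public class parallel_arrays_demo
{
    // Parallel arrays: index i in FIELDS and DEFAULTS describes the same field.
    static String[] FIELDS = new String[0];
    static boolean[] DEFAULTS = new boolean[0];

    // Appends a field by growing both arrays in lockstep; returns its index.
    static int add_field(String field_, boolean is_default_)
    {
        int i = FIELDS.length;

        FIELDS = Arrays.copyOf(FIELDS, i + 1);
        DEFAULTS = Arrays.copyOf(DEFAULTS, i + 1);

        FIELDS[i] = field_;
        DEFAULTS[i] = is_default_;

        return i;
    }

    // Linear lookup of a field's index; -1 when absent.
    static int get_field_i(String field_)
    {
        for (int i = 0; i < FIELDS.length; i++)
        {
            if (FIELDS[i].equals(field_)) return i;
        }

        return -1;
    }
}
```

Compared with a HashMap of objects, this layout trades lookup convenience for a smaller memory footprint, which is the direction the evolution described above has been taking.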


Queries

The different types of querying methods (e.g., the more generic ones in accessory.db or accessory.db_quick vs. the more specific ones in accessory.db_common) are more or less self-explanatory. Even so, some underlying issues need further discussion. The fact of not relying on parameterised queries, for example, might strike some people as non-ideal, even plainly wrong. The original reason for focusing on simple string concatenation was to have the most adaptable format at a time when I wasn't even sure what the final result would be. Evidently, the most basic safety measures were put in place from the very beginning. This should be clear just by bearing in mind that safety-focused friendliness (i.e., being able to safely throw virtually anything at the code) was a major concern when building the first, default DB mode, which happened during the first stages of the development.
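
When queries are built via concatenation, the minimum safety measure is escaping the values before embedding them in the string. A toy illustration of the principle (this simplified escaping is not the library's actual implementation; real drivers also handle charsets, SQL modes and other corner cases):

```java
public class concat_query_demo
{
    // Simplified escaping for a MySQL-style single-quoted string literal:
    // backslashes and single quotes are escaped, then the value is quoted.
    static String escape_value(String input_)
    {
        return "'" + input_.replace("\\", "\\\\").replace("'", "\\'") + "'";
    }

    // Builds a SELECT by concatenation, embedding only the escaped value.
    static String get_select(String table_, String col_, String value_)
    {
        return "SELECT * FROM `" + table_ + "` WHERE `" + col_ + "`=" + escape_value(value_);
    }

    public static void main(String[] args)
    {
        // A single quote inside the value can no longer break out of the literal.
        System.out.println(get_select("users", "name", "O'Brien"));
    }
}
```

With checks of this sort applied systematically at the entry points, concatenated queries can stay both adaptable and reasonably safe, which is the trade-off described above.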

static String get_query
(
    String source_, String type_, String[] cols_, HashMap<String, String> vals_, 
    String where_, int max_rows_, String order_, HashMap<String, db_field> cols_info_, 
    boolean is_quicker_
)
{	
    String query = strings.DEFAULT;
    if 
    (
        !is_quicker_ && !db_sql.params_are_ok
        (
            source_, type_, cols_, vals_, where_, 
            max_rows_, order_, cols_info_, is_quicker_
        )
    ) 
    { return query; }

    boolean is_ok = false;
        
    String table = (is_quicker_ ? db_quicker.get_table(source_) : db.get_table(source_));

    if (type_.equals(db.QUERY_SELECT))
    {
        query = "SELECT ";
        query += (arrays.is_ok(cols_) ? get_query_cols(cols_) : "*");     	
        query += " FROM " + get_variable_static(table);

        if (strings.is_ok(where_)) query += " WHERE " + where_;
        if (strings.is_ok(order_)) query += " ORDER BY " + order_;
        if (max_rows_ > 0) query += " LIMIT " + max_rows_;	

        is_ok = true;
    }
    else if (type_.equals(db.QUERY_INSERT))
    {
        query = "INSERT INTO " + get_variable_static(table); 
        String temp = get_query_cols(vals_, _keys.KEY);

        if (strings.is_ok(temp)) 
        {
            query += "(" + temp + ")";

            temp = get_query_cols(vals_, _keys.VALUE);
            if (strings.is_ok(temp)) 
            {
                query += " VALUES (" + temp + ")"; 
                is_ok = true;
            }      		
        } 
    }
    else if (type_.equals(db.QUERY_UPDATE))
    {
        query = "UPDATE " + get_variable_static(table); 

        String temp = get_query_cols(vals_, _keys.FURTHER);
        if (strings.is_ok(temp)) 
        {
            query += " SET " + temp;
            is_ok = true;     		
        }

        if (strings.is_ok(where_)) query += " WHERE " + where_;
    }
    else if (type_.equals(db.QUERY_DELETE))
    {
        query = "DELETE FROM " + get_variable_static(table);
        query += " WHERE " + where_;

        is_ok = true;
    }
    else if (type_.equals(db.QUERY_TABLE_EXISTS))
    {
        query = get_query_table_exists(table);			
        
        is_ok = true;
    }
    else if (type_.equals(db.QUERY_TABLE_CREATE))
    {
        query = get_query_create_table(table, cols_info_);
            
        is_ok = strings.is_ok(query);
    }
    else if (type_.equals(db.QUERY_TABLE_DROP))
    {
        query = get_query_drop_table(table);	
            
        is_ok = true;
    }
    else if (type_.equals(db.QUERY_TABLE_TRUNCATE))
    {
        query = "TRUNCATE TABLE " + get_variable_static(table);	
            
        is_ok = true;
    }
    else if (type_.equals(db.QUERY_SELECT_COUNT))
    {
        query = 
        (
            "SELECT " + get_select_count_col_static() + 
            " FROM " + get_variable_static(table)
        );			
        if (strings.is_ok(where_)) query += " WHERE " + where_;
            
        is_ok = true;
    }

    if (!is_ok)
    {
        db.manage_error(source_, db.ERROR_QUERY, null, null, null, is_quicker_);

        query = strings.DEFAULT;
    }

    return query;
}


As already discussed in a previous article, I tend to rely on more "expensive" collections, mainly locally and at the start of an implementation. Later, if proven necessary, performing all the required improvements also comes quite naturally to me, as with the main DB global arrays. Due to its importance and widespread usage, performing changes of this sort in the code dealing with queries would involve a much higher effort and a larger number of side effects. That is why the SELECT queries return instances of a type (i.e., ArrayList&lt;HashMap&lt;String, String&gt;&gt;) which, in some cases, can be too memory-consuming in comparison with alternatives such as HashMap&lt;String, String&gt;[] or String[][]. There is a specific scenario where this issue can become slightly problematic and, for that reason, I might have to come up with a better option. In any case, it would target very specific conditions and would allow the current version to continue working as usual. Other types, like the HashMap&lt;String, String&gt; parameter in insert/update queries, although potentially improvable, can be assumed to be good enough almost regardless of the conditions.
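
To make the trade-off concrete, the returned rows can be flattened into a String[][] once the column order is fixed, dropping the per-row hash tables. A hypothetical conversion helper (not part of the library):

```java
import java.util.ArrayList;
import java.util.HashMap;

public class rows_to_matrix_demo
{
    // Converts the ArrayList<HashMap<String, String>> shape discussed above
    // into a denser String[][], given a fixed column order.
    static String[][] to_matrix(ArrayList<HashMap<String, String>> rows_, String[] cols_)
    {
        String[][] output = new String[rows_.size()][cols_.length];

        for (int row = 0; row < rows_.size(); row++)
        {
            for (int col = 0; col < cols_.length; col++)
            {
                String val = rows_.get(row).get(cols_[col]);

                output[row][col] = (val == null ? "" : val);
            }
        }

        return output;
    }
}
```

The cost is that each cell is now addressed by position rather than by column name, which is precisely the kind of friendliness the current ArrayList-of-maps version prioritises.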

private static ArrayList<HashMap<String, String>> execute_query
(
    String source_, String query_, boolean return_data_, 
    String[] cols_, Connection conn_, boolean is_static_
)
{
    db.update_is_ok(source_, false, is_static_);

    db._last_query = query_;
        
    ArrayList<HashMap<String, String>> output = new ArrayList<HashMap<String, String>>();
    if (conn_ == null) return output;
        
    try 
    {
        PreparedStatement statement = conn_.prepareStatement(query_);

        if (!return_data_) 
        {
            statement.executeUpdate();

            db.update_is_ok(source_, true, is_static_);

            return output;
        }

        try
        {
            ResultSet data = statement.executeQuery();

            String[] cols = execute_query_get_cols(source_, data, cols_, is_static_);
            if (!arrays.is_ok(cols)) return output;
                
            while (data.next()) 
            {
                HashMap<String, String> row = new HashMap<String, String>();

                for (String col : cols)
                {
                    String val = 
                    (
                        (strings.is_ok(col) && data.findColumn(col) > -1) ? 
                        data.getString(col) : strings.DEFAULT
                    );
                    if (!strings.is_ok(val)) val = strings.DEFAULT;

                    row.put(col, val);
                }

                output.add(row);
            }

            db.update_is_ok(source_, true, is_static_);
        }
        catch (Exception e) 
        { 
            db.manage_error(source_, db.ERROR_QUERY, query_, e, null, is_static_); 
        }
    } 
    catch (Exception e) 
    { 
        db.manage_error(source_, db.ERROR_QUERY, query_, e, null, is_static_); 
    } 
    finally { disconnect(source_, conn_, is_static_); }

    return output;
}


DB Cache

The latest outcome of my optimisation efforts on the DB code is a new approach which could easily become the fourth mode: DB cache. Its main code is currently stored in accessory.db_cache and accessory.db_cache_mysql, and it also relies heavily on some accessory.strings parsing methods which I have recently created. The premise is simple: having the plainest possible interaction (i.e., executing the final query right away) while still being able to take advantage of accessory_java's comprehensive support. It is still a bit too soon, but its essential structure already seems well-defined; it consists of three main parts: adding placeholders to the query, for cases where some values need to be updated, through one of the db_cache.add_placeholders methods; storing the given query and other basic information via db_cache.add; and executing the query by calling db_cache.execute or db_cache.execute_simple.
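
The three steps above can be sketched with a deliberately toy version of the flow (all names, the placeholder format and the storage are illustrative, not the actual db_cache implementation):

```java
import java.util.ArrayList;

public class cache_demo
{
    // Toy query store: the returned index plays the role of the cached-query ID.
    static ArrayList<String> QUERIES = new ArrayList<String>();

    // Step 1: the placeholder token that marks a changing value in the query.
    static String get_placeholder(int id_) { return "{{" + id_ + "}}"; }

    // Step 2: store the query with its placeholders; return its ID.
    static int add(String query_)
    {
        QUERIES.add(query_);

        return QUERIES.size() - 1;
    }

    // Step 3: build the final query by swapping in the current value.
    // In the real code, the value would be validated/escaped and then executed.
    static String get_final_query(int id_, int placeholder_id_, String value_)
    {
        return QUERIES.get(id_).replace(get_placeholder(placeholder_id_), value_);
    }

    public static void main(String[] args)
    {
        int id = add("SELECT COUNT(*) FROM `async_data` WHERE `symbol`=" + get_placeholder(0));

        System.out.println(get_final_query(id, 0, "'AAPL'"));
    }
}
```

The point of the design is that all the expensive preparation (building, validating, placeholder insertion) happens once, and each execution is reduced to a cheap string substitution.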

static String add_placeholders
(
    String source_, String query_, String col_, int id_, 
    String quote_value_, String quote_variable_
)
{
    if 
    (
        id_ <= WRONG_ID || !db.source_is_ok(source_) || 
        !strings.are_ok(new String[] { query_, col_ })
    ) 
    { return strings.DEFAULT; }
        
    String output = query_;
        
    String placeholder = get_placeholder(id_);
    
    int start_i = get_table_i(source_, query_, quote_value_, quote_variable_);
    if (start_i <= strings.WRONG_I) return output;
        
    while (true)
    {
        int i = get_i_end_variable
        (
            quote_variable_ + col_ + quote_variable_, output, start_i, quote_value_
        );			
        if (i <= strings.WRONG_I) break;
            
        int[] is = get_is(output, i + 1, quote_value_);
        if (is == null) break;
            
        String[] before_after = strings.get_before_after(output, is);
            
        output = before_after[strings.BEFORE_I] + placeholder + before_after[strings.AFTER_I];
        
        start_i = i;
    }
        
    return output;
}


public static int[] get_is
(
    String[] needles_, String haystack_, boolean normalise_, String start_, 
    String end_, int start_i_, boolean find_include_end_
) 
{
    int[] output_wrong = null;
    if (!is_ok(haystack_) || !arrays.is_ok(needles_)) return output_wrong;
        
    int[] output = new int[] { WRONG_I, WRONG_I };
        
    int start_i = (start_i_ > WRONG_I ? start_i_ : 0);
    int max_i = needles_.length - 1;
        
    int last_i = haystack_.length() - 1;
    int end_i = last_i;
        
    int i2 = -1;
        
    for (int i = 0; i <= max_i; i++)
    {
        if (i == 0 && needles_[i] == null) continue;
            
        int temp = get_i(needles_[i], haystack_, normalise_, start_, end_, start_i);
            
        if (i > 0 && i < max_i) 
        {
            if (temp > WRONG_I && temp < end_i) end_i = temp;
                
            continue;
        }
        else if (temp <= WRONG_I && (i == 0 || find_include_end_)) return output_wrong;
            
        i2++;
        output[i2] = temp;
            
        start_i = temp + 1;
    }
        
    if (needles_[0] == null) output[END_I] = end_i;
    else if (find_include_end_ && output[END_I] <= WRONG_I) return output_wrong;
        
    if (!find_include_end_ && output[END_I] > WRONG_I) output[END_I]--;
        
    if (output[START_I] <= WRONG_I) 
    {
        if (output[END_I] <= WRONG_I) return output_wrong;
        else output[START_I] = (start_i_ > 0 ? start_i_ : 0);
    }
    else if (output[END_I] <= WRONG_I) output[END_I] = last_i;
        
    return output;
}
    
public static int get_i
(
    String needle_, String haystack_, boolean normalise_, 
    String start_, String end_, int start_i_
) 
{ return index_of_outside(needle_, haystack_, normalise_, start_, end_, start_i_); }
    
public static int index_of_outside
(
    String needle_, String haystack_, boolean normalise_, String start_, String end_
) 
{ return index_of_outside(needle_, haystack_, normalise_, start_, end_, 0); }
    
public static int index_of_outside
(
    String needle_, String haystack_, boolean normalise_, 
    String start_, String end_, int start_i_
)
{
    if 
    (
        !contains(start_, haystack_, normalise_) || 
        !contains(end_, haystack_, normalise_)
    ) 
    { return index_of(needle_, haystack_, start_i_, normalise_); }

    int output = WRONG_I;

    int start_i = (start_i_ > 0 ? start_i_ : 0);
        
    while (true)
    {
        output = index_of(needle_, haystack_, start_i, normalise_);
        if (output < 0) return output;

        int count = 1;
            
        while (count <= 2)
        {	
            String needle = (count == 1 ? start_ : end_);
                
            int i = WRONG_I;
                
            while (true)
            {
                i = index_of(needle, haystack_, start_i, normalise_);
                if (i < 0 || (count == 1 && i >= output)) return output;
                    
                if (count == 2 && i <= get_end_i(output, needle_.length())) 
                {
                    count = 0;
                        
                    break;
                }

                if 
                (
                    i > 0 && substring(haystack_, i - 1, 1).equals("\\") &&
                    (i == 1 || !substring(haystack_, i - 2, 1).equals("\\"))
                ) 
                { 
                    start_i++; 
                }
                else break;
            }
                                
            start_i = i + 1;
            count++;
        }			
    }
}
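
The core idea of index_of_outside, finding a needle only when it lies outside quoted regions while honouring backslash-escaped quotes, can be illustrated with a much simpler standalone version (this sketch supports a single quote character and is not the library's actual, more general implementation):

```java
public class outside_quotes_demo
{
    // Returns the first index of needle_ in haystack_ that is not inside a
    // region delimited by quote_. A quote preceded by a backslash does not
    // open or close a region. Returns -1 when no such occurrence exists.
    static int index_of_outside(String needle_, String haystack_, char quote_)
    {
        boolean inside = false;

        for (int i = 0; i < haystack_.length(); i++)
        {
            char c = haystack_.charAt(i);

            if (c == quote_ && (i == 0 || haystack_.charAt(i - 1) != '\\'))
            {
                inside = !inside;
            }
            else if (!inside && haystack_.startsWith(needle_, i)) return i;
        }

        return -1;
    }

    public static void main(String[] args)
    {
        // The first "WHERE" is inside the quoted value, so only the second matches.
        System.out.println(index_of_outside("WHERE", "a='WHERE x' WHERE y", '\''));
    }
}
```

This is exactly why the cache code can safely insert placeholders: column references inside quoted values are never mistaken for the real occurrences in the query.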


Despite still being only marginally used in ib, it has already proven its usefulness. Relying on this approach does speed the query execution up, although the improvement isn't really that important. Memory- and speed-wise, some of the latest updates of the DB code have had a bigger impact. Not to mention that these gains can only be enjoyed on a per-query basis and require a somewhat relevant amount of code, mainly now that most of the supporting code has yet to be written. But I see two pieces of good news here. Firstly, faster queries are possible and most of what is required to make this happen is already in place. Secondly, the improvement not being too important confirms that the current implementation, mainly the quicker mode, is already quite optimal, because stripping more than what DB cache does is almost impossible.

class async_data_cache_quicker 
{
    public static String[] COLS = null;
    
    public static int EXISTS = db_cache.WRONG_ID;
    
    public static boolean exists(String symbol_) 
    { 
        return db_cache_mysql.exists_simple
        (
            EXISTS, cache.add_changing_val
            (
                get_col_id(async_data_quicker.COL_SYMBOL), symbol_
            )
        ); 
    }
    
    public static void populate(String source_) { add_exists(source_); }
    
    private static void add_exists(String source_)
    {
        String col = async_data_quicker.COL_SYMBOL;
        
        String where = cache.get_variable(source_, col) + "=" + cache.get_value(source_);
        
        String query = cache.get_query_exists(source_, where);
        
        EXISTS = cache.add_query(source_, query, col, get_col_id(col), true);
    }
    
    private static int get_col_id(String col_) { return arrays_quick.get_i(COLS, col_); }
}


Alternative past

The peculiar evolution of these libraries has produced multiple outcomes which could be considered non-ideal, even though they were certainly worthwhile and almost a requirement for reaching my current insights. Either way, if I were starting right now, I would probably do some parts differently. And this is precisely the point of the present section: describing the main changes which would occur in that alternative reality.

Default mode

The first, default DB mode would almost certainly be different. In principle, relying on a fully static approach, something similar to the quicker mode or even closer to the new DB cache, could be seen as the best option. But this would provoke lots of repeated code, at least in case of wanting to keep support for other DB types, which might make a mixed option more advisable. In a minimal version, a completely static implementation would make more sense, as would further replacements such as accessory.arrays_quick instead of accessory.arrays. In fact, I will probably do something along these lines if I ever develop libraries similar to accessory_java for other languages (e.g., accessory_[language]_min).

public static boolean[] add(boolean[] array_, int i_, boolean value_)
{
    if (i_ >= array_.length) array_ = redim(array_, i_ + 1);
        
    array_[i_] = value_;
        
    return array_;
}
    
public static String[] redim(String[] array_) { return redim(array_, array_.length + 1); }
    
public static String[] redim(String[] array_, int new_size_) 
{ 
    return Arrays.copyOf(array_, new_size_); 
}
    
public static int[] redim(int[] array_) { return redim(array_, array_.length + 1); }
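
The helpers above grow primitive arrays on demand via Arrays.copyOf. A self-contained usage sketch, assuming the boolean[] redim overload (not shown in the excerpt) mirrors the String[]/int[] ones:

```java
import java.util.Arrays;

public class redim_demo
{
    // Assumed boolean[] counterpart of the redim overloads shown above.
    static boolean[] redim(boolean[] array_, int new_size_)
    {
        return Arrays.copyOf(array_, new_size_);
    }

    // Same logic as the add() method above: grow the array when the target
    // index is out of bounds, then assign. New slots default to false.
    static boolean[] add(boolean[] array_, int i_, boolean value_)
    {
        if (i_ >= array_.length) array_ = redim(array_, i_ + 1);

        array_[i_] = value_;

        return array_;
    }

    public static void main(String[] args)
    {
        boolean[] flags = new boolean[0];

        flags = add(flags, 2, true); // grows the array to length 3

        System.out.println(flags.length); // prints 3
        System.out.println(flags[2]);     // prints true
    }
}
```

Note that the caller must reassign the returned reference (flags = add(flags, ...)): copyOf allocates a new array, so the original reference is left untouched.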


Types/keys (values?!)

There are good reasons for this duality, and the overall support for sources is very nice. On the other hand, the whole structure is a bit too complex and, in particular, fields/cols have created a lot of hassle. Sources/tables would certainly remain, but the fields/cols scenario is more doubtful. Setups might also be different. The DB types would be simpler and not part of CONFIG.

public static String get_col(String source_, String field_) 
{ 
    int i = db.get_field_i(source_, field_);
        
    return (i > arrays.WRONG_I ? _cols[i] : db.get_col(source_, field_)); 
}

public static String[] get_cols(String source_) 
{ 
    String[] output = null;
        
    int[] min_max = db.get_source_min_max(source_);
    if (min_max == null) return output;
        
    output = new String[db.get_source_fields_tot(min_max)];

    int i2 = -1;

    for (int i = min_max[0]; i <= min_max[1]; i++) 
    {
        i2++;
            
        output[i2] = _cols[i];
    }
        
    return output;
}

public static String[] get_cols(String source_, String[] fields_) 
{ 
    int tot = arrays.get_size(fields_);
    if (tot == 0) return null;
                
    String[] cols = new String[tot];
        
    for (int i = 0; i < tot; i++) 
    { 
        String col = get_col(source_, fields_[i]);
        if (!strings.is_ok(col)) continue;
            
        cols[i] = col;
    }
        
    return cols;
}


Non-static classes

accessory.data would be removed. accessory.db_where and accessory.db_order wouldn't be there either or, at least, their structure would be much simpler. accessory.db_field would surely not be removed, although its exact implementation isn't too clear. In principle, it could remain more or less as it is right now, where its major drawback of inefficient storage is minimised via serialisation.

static String[] SETUP_IDS = new String[0];
static String[] SOURCES = new String[0];
static String[] FIELDS = new String[0];
    
static String INSTANCE = null;

static String _cur_source = strings.DEFAULT;
static String _cur_type = DEFAULT_TYPE;
static String _last_query = null;
    
private static int[] SOURCES0 = new int[0];
private static db_field[] FIELDS0 = new db_field[0];
private static Object[][] SETUPS = new Object[0][0];
private static boolean[] DEFAULTS = new boolean[0];


Queries

The main querying structure is likely to remain as it is now, with a few minor variations. For example, the duplication of methods between the generic versions in the main classes and the more to-the-point options in accessory.db_common would probably disappear. The overloads with non-static parameters (e.g., accessory.db_exists(String source_, db_where[] wheres_)), initially and wrongly assumed to be more useful than the string alternatives, would be removed too. The main returned type would surely change, most likely to HashMap&lt;String, String&gt;[]. I would probably keep the execution of unparameterised queries because I don't really see a problem with it. Actually, it proved quite helpful while performing certain implementations (e.g., DB cache).

@SuppressWarnings("unchecked")
public static ArrayList<Boolean> select_some_booleans
(
    String source_, String field_, String where_cols_, 
    int max_rows_, String order_cols_
) 
{ 
    return (ArrayList<Boolean>)db_queries.select_some_common
    (
        source_, field_, where_cols_, max_rows_, order_cols_, data.BOOLEAN, false
    );
}

@SuppressWarnings("unchecked")
public static ArrayList<Boolean> select_some_booleans_quick
(
    String source_, String col_, String where_cols_, int max_rows_, String order_cols_
)
{ 
    return (ArrayList<Boolean>)db_queries.select_some_common
    (
        source_, col_, where_cols_, max_rows_, order_cols_, data.BOOLEAN, true
    ); 
}

public static ArrayList<HashMap<String, String>> select
(
    String source_, db_where[] wheres_
) 
{ return select(source_, DEFAULT_FIELDS_COLS, wheres_, DEFAULT_MAX_ROWS, null); }


static String get_query
(
    String source_, String type_, String[] cols_, HashMap<String, String> vals_, 
    String where_, int max_rows_, String order_, HashMap<String, db_field> cols_info_, 
    boolean perform_checks_, boolean is_static_
)
{	
    String query = strings.DEFAULT;
    
    if 
    (
        perform_checks_ && !db_sql.params_are_ok
        (
            source_, type_, cols_, vals_, where_, 
            max_rows_, order_, cols_info_, is_static_
        )
    ) 
    { return query; }

    boolean is_ok = false;
    String table = db.get_table(source_);

    if (type_.equals(db.QUERY_SELECT))
    {
        query = "SELECT ";
        query += (arrays.is_ok(cols_) ? get_query_cols(cols_) : "*");     	
        query += " FROM " + get_variable_static(table);

        if (strings.is_ok(where_)) query += " WHERE " + where_;
        if (strings.is_ok(order_)) query += " ORDER BY " + order_;
        if (max_rows_ > 0) query += " LIMIT " + max_rows_;	

        is_ok = true;
    }
    else if (type_.equals(db.QUERY_INSERT))
    {
        query = "INSERT INTO " + get_variable_static(table); 
        String temp = get_query_cols(vals_, _keys.KEY);

        if (strings.is_ok(temp)) 
        {
            query += "(" + temp + ")";

            temp = get_query_cols(vals_, _keys.VALUE);
            if (strings.is_ok(temp)) 
            {
                query += " VALUES (" + temp + ")"; 
                is_ok = true;
            }      		
        } 
    }
    else if (type_.equals(db.QUERY_UPDATE))
    {
        query = "UPDATE " + get_variable_static(table); 

        String temp = get_query_cols(vals_, _keys.FURTHER);
        if (strings.is_ok(temp)) 
        {
            query += " SET " + temp;
            is_ok = true;     		
        }

        if (strings.is_ok(where_)) query += " WHERE " + where_;
    }
    else if (type_.equals(db.QUERY_DELETE))
    {
        query = "DELETE FROM " + get_variable_static(table);
        query += " WHERE " + where_;

        is_ok = true;
    }
    else if (type_.equals(db.QUERY_TABLE_EXISTS))
    {
        query = get_query_table_exists(table);			
    
        is_ok = true;
    }
    else if (type_.equals(db.QUERY_TABLE_CREATE))
    {
        query = get_query_create_table(table, cols_info_);
        
        is_ok = strings.is_ok(query);
    }
    else if (type_.equals(db.QUERY_TABLE_DROP))
    {
        query = get_query_drop_table(table);	
        
        is_ok = true;
    }
    else if (type_.equals(db.QUERY_TABLE_TRUNCATE))
    {
        query = "TRUNCATE TABLE " + get_variable_static(table);	
        
        is_ok = true;
    }
    else if (type_.equals(db.QUERY_SELECT_COUNT))
    {
        query = 
        (
            "SELECT " + get_select_count_col_static() + 
            " FROM " + get_variable_static(table)
        );			
        if (strings.is_ok(where_)) query += " WHERE " + where_;
            
        is_ok = true;
    }

    if (!is_ok)
    {
        db.manage_error(source_, db.ERROR_QUERY, null, null, null, is_static_);

        query = strings.DEFAULT;
    }

    return query;
}

