@andineck
Last active January 12, 2016 06:49
spring notes, mongodb

http://docs.spring.io/spring-data/mongodb/docs/1.5.4.RELEASE/reference/html/mapping-chapter.html

6.3.1 Mapping annotation overview

The MappingMongoConverter can use metadata to drive the mapping of objects to documents. An overview of the annotations is provided below.

  • @Id - applied at the field level to mark the field used for identity purposes.
  • @Document - applied at the class level to indicate this class is a candidate for mapping to the database. You can specify the name of the collection where the document will be stored: @Document(collection = "collectionName")
  • @DBRef - applied at the field level to indicate it is to be stored using a com.mongodb.DBRef.
  • @Indexed - applied at the field level to describe how to index the field.
  • @CompoundIndex - applied at the type level to declare compound indexes.
  • @GeoSpatialIndexed - applied at the field level to describe how to geoindex the field.
  • @Transient - by default all private fields are mapped to the document; this annotation excludes the field it is applied to from being stored in the database.
  • @PersistenceConstructor - marks a given constructor - even a package-protected one - to be used when instantiating the object from the database. Constructor arguments are mapped by name to the key values in the retrieved DBObject.
  • @Value - this annotation is part of the Spring Framework. Within the mapping framework it can be applied to constructor arguments. This lets you use a Spring Expression Language statement to transform a key's value retrieved from the database before it is used to construct a domain object. To reference a property of a given document, use an expression like @Value("#root.myProperty"), where root refers to the root of the given document.
  • @Field - applied at the field level; describes the name of the field as it will be represented in the MongoDB BSON document, thus allowing the name to be different from the field name of the class.

http://docs.spring.io/spring-data/data-mongo/docs/1.8.1.RELEASE/reference/html/#mapping-usage-annotations

The MappingMongoConverter can use metadata to drive the mapping of objects to documents. An overview of the annotations is provided below.

@Id - applied at the field level to mark the field used for identity purposes.

@Document - applied at the class level to indicate this class is a candidate for mapping to the database. You can specify the name of the collection where the document will be stored.

@DBRef - applied at the field level to indicate it is to be stored using a com.mongodb.DBRef.

@Indexed - applied at the field level to describe how to index the field.

@CompoundIndex - applied at the type level to declare compound indexes.

@GeoSpatialIndexed - applied at the field level to describe how to geoindex the field.

@TextIndexed - applied at the field level to mark the field to be included in the text index.

@Language - applied at the field level to set the language override property for the text index.

@Transient - by default all private fields are mapped to the document; this annotation excludes the field it is applied to from being stored in the database.

@PersistenceConstructor - marks a given constructor - even a package-protected one - to be used when instantiating the object from the database. Constructor arguments are mapped by name to the key values in the retrieved DBObject.

@Value - this annotation is part of the Spring Framework. Within the mapping framework it can be applied to constructor arguments. This lets you use a Spring Expression Language statement to transform a key's value retrieved from the database before it is used to construct a domain object. To reference a property of a given document, use an expression like @Value("#root.myProperty"), where root refers to the root of the given document.

@Field - applied at the field level; describes the name of the field as it will be represented in the MongoDB BSON document, thus allowing the name to be different from the field name of the class.

@Version - applied at the field level; used for optimistic locking and checked for modification on save operations. The initial value is zero and is bumped automatically on every update.
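As a rough illustration, here is a hypothetical domain class combining several of the annotations above (the class, collection, and field names are invented for this sketch):

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.PersistenceConstructor;
import org.springframework.data.annotation.Transient;
import org.springframework.data.annotation.Version;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Field;

@Document(collection = "persons")   // stored in the "persons" collection
public class Person {

    @Id
    private String id;              // mapped to MongoDB's _id

    @Indexed(unique = true)
    private String email;           // single-field index on "email"

    @Field("first_name")
    private String firstName;       // BSON key differs from the Java field name

    @Version
    private Long version;           // optimistic locking, bumped on every update

    @Transient
    private String displayName;     // excluded from the stored document

    @PersistenceConstructor
    public Person(String id, String email, String firstName) {
        this.id = id;
        this.email = email;
        this.firstName = firstName;
    }
}
```

This needs spring-data-mongodb on the classpath; outside a Spring Data context the annotations do nothing.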

andineck commented Dec 1, 2015

package com.intesso.model.util;

import java.io.File;
import java.io.IOException;
import java.net.UnknownHostException;
import java.util.Set;

import org.slf4j.Logger;

import com.intesso.guice.InjectLogger;
import com.intesso.guice.annotation.MongoDatabase;
import com.intesso.guice.annotation.MongoExportCmd;
import com.intesso.guice.annotation.MongoExportPath;
import com.intesso.guice.annotation.MongoHost;
import com.intesso.guice.annotation.MongoImportCmd;
import com.intesso.guice.annotation.MongoImportPath;
import com.intesso.guice.annotation.MongoPassword;
import com.intesso.guice.annotation.MongoPort;
import com.intesso.guice.annotation.MongoUser;
import com.google.code.morphia.Datastore;
import com.google.code.morphia.Morphia;
import com.google.inject.Inject;
import com.mongodb.DB;
import com.mongodb.Mongo;
import com.mongodb.MongoException;

public class MongoDatastore implements IMongoDatastore {

    private @InjectLogger Logger logger;

    private Datastore datastore;
    private Mongo mongo;
    private Morphia morphia;
    private @Inject @MongoExportCmd String exportCmd;
    private @Inject @MongoExportPath String exportPath;
    private @Inject @MongoImportCmd String importCmd;
    private @Inject @MongoImportPath String importPath;
    private final String host;
    private final Integer port;
    private final String database;
    private final String user;
    private final String password;

    @Inject public MongoDatastore(@MongoHost String host, @MongoPort Integer port, @MongoDatabase String database, @MongoUser String user,
            @MongoPassword String password) {

        this.host = host;
        this.port = port;
        this.database = database;
        this.user = user;
        this.password = password;

        connect();
    }

    public void connect() {
        try {
            mongo = new Mongo(host, port);
        } catch (UnknownHostException e) {
            logger.error("Mongo host not reachable: {}:{}, database: {}, user: {}", new Object[] { host, port, database, user });
            throw new MongoException("unknown host...", e);
        }

        DB db = mongo.getDB(database);
        if (!db.isAuthenticated()) {
            db.addUser(user, password.toCharArray());
        }

        morphia = new Morphia();
        datastore = morphia.createDatastore(mongo, database, user, password.toCharArray());
        datastore.ensureCaps();
        datastore.ensureIndexes();

    }

    /*
     * (non-Javadoc)
     *
     * @see com.intesso.model.util.IMongoDatastore#getMorphia()
     */
    public Morphia getMorphia() {
        return morphia;
    }

    /*
     * (non-Javadoc)
     *
     * @see com.intesso.model.util.IMongoDatastore#getDatastore()
     */
    public Datastore getDatastore() {
        return datastore;
    }

    /*
     * (non-Javadoc)
     *
     * @see com.intesso.model.util.IMongoDatastore#exportMongoDb()
     */
    public void exportMongoDb() {
        String d = datastore.getDB().getName();
        logger.debug("start export for mongo database: {}", d);

        Runtime runtimeObject = Runtime.getRuntime();
        Set<String> collections = datastore.getDB().getCollectionNames();

        for (String c : collections) {
            try {
                String o = exportPath + c + ".json";
                runtimeObject.exec(exportCmd + " -d " + d + " -c " + c + " -o " + o);
                logger.debug("  mongoexport for collection: {}", o);
            } catch (IOException e) {
                logger.error("mongoexport terminated with errors. database: {}, collection: {}, error: {}", new Object[] { d, c, e.getMessage() });
            }
        }
        logger.debug("completed export for mongo database: {}", d);
    }

    /*
     * (non-Javadoc)
     *
     * @see com.intesso.model.util.IMongoDatastore#importMongoDb()
     */
    public void importMongoDb() {
        String d = datastore.getDB().getName();
        logger.debug("start import for mongo database: {}", d);

        datastore.getDB().dropDatabase();
        connect();

        Runtime runtimeObject = Runtime.getRuntime();
        File importDirectory = new File(importPath);
        File[] importFiles = importDirectory.listFiles();
        if (importFiles == null) {
            logger.error("import path does not exist or is not a directory: {}", importPath);
            return;
        }

        for (File file : importFiles) {
            String c = file.getName().replace(".json", "");
            try {
                String command = importCmd + " -d " + d + " -c " + c + " --file " + importPath + file.getName() + " --upsert";
                runtimeObject.exec(command);
                logger.debug("  mongoimport:  {}", command);

            } catch (IOException e) {
                logger.error("mongoimport terminated with errors. database: {}, collection: {}, error: {}", new Object[] { d, c, e.getMessage() });
            }
        }
        logger.debug("completed import for mongo database: {}", d);
    }

}
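One fragile spot in the class above is building the mongoexport/mongoimport shell commands by string concatenation and handing them to Runtime.exec(String): a path or collection name containing spaces breaks the command. A sketch of a safer variant using ProcessBuilder with an explicit argument list (buildExportCommand is a hypothetical helper, not part of the class above):

```java
import java.util.Arrays;
import java.util.List;

public class MongoExportCommand {

    // Hypothetical helper: build the mongoexport invocation as an argument list,
    // so each argument reaches the process as-is (no shell word splitting).
    static List<String> buildExportCommand(String exportCmd, String db,
                                           String collection, String outFile) {
        return Arrays.asList(exportCmd, "-d", db, "-c", collection, "-o", outFile);
    }

    public static void main(String[] args) throws Exception {
        List<String> cmd = buildExportCommand("mongoexport", "test", "users", "/tmp/users.json");
        System.out.println(cmd);
        // To actually run it:
        // new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}
```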


MongoDB Limitations:

Locks

http://blog.mongodirector.com/atomicity-isolation-concurrency-in-mongodb/

MongoDB uses locks to prevent multiple clients from updating the same piece of data at the same time. MongoDB 2.2+ uses database-level locks, so when one write operation locks the database, all other write operations to the same database (even if they target a separate collection) are blocked waiting on the lock. MongoDB uses "writer greedy" locks - it favors writes over reads. In 2.2+ certain long-running operations can yield their locks.

https://docs.mongodb.org/manual/faq/concurrency/

Write and Read Consistency (or not so much of it)

https://aphyr.com/posts/322-call-me-maybe-mongodb-stale-reads

  • So: if you use MongoDB, you should almost always be using the Majority write concern. Anything less is asking for data corruption or loss when a primary transition occurs.
  • In MongoDB, reads from a primary have strict consistency; reads from secondary members have eventual consistency.
  • modes other than primary can and will return stale data because the secondary queries will not include the most recent write operations to the replica set’s primary.
  • All read preference modes except primary may return stale data because secondaries replicate operations from the primary with some delay. Ensure that your application can tolerate stale data if you choose to use a non-primary mode.
  • Although the docs say “For all inserts and updates, MongoDB modifies each document in isolation: clients never see documents in intermediate states,” they also warn:
  • MongoDB allows clients to read documents inserted or modified before it commits these modifications to disk, regardless of write concern level or journaling configuration…
  • For systems with multiple concurrent readers and writers, MongoDB will allow clients to read the results of a write operation before the write operation returns.

Changed in version 3.2: MongoDB 3.2 introduces readConcern option. Clients using majority readConcern cannot see the results of writes before they are made durable.
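With the plain MongoDB Java driver (3.x), the write and read concern can be set per collection; a minimal sketch (host, database, and collection names are invented for this example):

```java
import com.mongodb.MongoClient;
import com.mongodb.ReadConcern;
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class MajorityConcernExample {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);

        // Acknowledge writes only once a majority of replica set members have
        // them, and read only majority-committed data (readConcern needs 3.2+).
        MongoCollection<Document> users = client.getDatabase("test")
                .getCollection("users")
                .withWriteConcern(WriteConcern.MAJORITY)
                .withReadConcern(ReadConcern.MAJORITY);

        users.insertOne(new Document("name", "alice"));
        client.close();
    }
}
```

Note that majority read/write concern is only meaningful against a replica set (and, in 3.2, majority readConcern additionally requires the WiredTiger storage engine).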


MongoDB Config in Spring-Boot:

application.yml

spring:
  data:
    mongodb:
      host: localhost
      port: 27017
      database: test
  profiles:
    active: test

https://bitbucket.org/mpatki01/spring-boot-todo/commits/77ff273a5376
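With that configuration in place, Spring Boot auto-configures the MongoDB connection, so a repository interface is all that is needed for basic CRUD. A sketch (PersonRepository and Person are hypothetical names; Person would be an @Document-annotated class):

```java
import org.springframework.data.mongodb.repository.MongoRepository;

// Spring Data generates the implementation at runtime; findByEmail is a
// derived query built from the method name.
public interface PersonRepository extends MongoRepository<Person, String> {
    Person findByEmail(String email);
}
```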
