- Step 1: Download Tomcat from here: http://tomcat.apache.org/download-70.cgi
- Step 2: Right-click the downloaded archive and extract it.
- Step 3: Cut (or copy) the extracted folder (e.g. apache-tomcat-7.0.53) and paste it into ~/Tools/Tomcat
- Step 4: Open a terminal and cd ~/Tools/Tomcat/apache-tomcat-7.0.53/conf
- Step 5: nano tomcat-users.xml and add the following at the end of the file, just before the closing </tomcat-users> tag:
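For example, to be able to log in to the Manager web app, an entry would look like this (manager-gui is Tomcat's standard role name; the user name and password below are placeholders you should change):

```xml
<role rolename="manager-gui"/>
<user username="admin" password="changeme" roles="manager-gui"/>
```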
<!doctype html>
<title>Site Maintenance</title>
<style>
  body { text-align: center; padding: 150px; }
  h1 { font-size: 50px; }
  body { font: 20px Helvetica, sans-serif; color: #333; }
  article { display: block; text-align: left; width: 650px; margin: 0 auto; }
  a { color: #dc8100; text-decoration: none; }
  a:hover { color: #333; text-decoration: none; }
</style>
- Scalability - You can load-balance multiple instances of your application behind a front-end server. This lets you handle more volume and increases stability if one of your instances goes down.
- Security - Apache, Tomcat, and GlassFish all support SSL, but if you decide to use Apache in front, that is most likely where you should configure it. For additional protection against attacks (DoS, XSS, SQL injection, etc.) you can install the mod_security web application firewall.
- Additional Features - Apache has a bunch of useful modules available for URL rewriting, interfacing with other programming languages, authentication, and a ton of other things.
- Clustering - Apache HTTP Server can act as a front door to multiple Apache Tomcat instances. If one of your Tomcats fails, Apache HTTP Server ignores it and your sysadmin can sleep through the night.
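A minimal sketch of such a front end using mod_proxy_balancer (this assumes mod_proxy, mod_proxy_http, and mod_proxy_balancer are enabled; the two ports stand in for two hypothetical local Tomcat instances):

```apache
# Spread requests for /app across two Tomcat instances
<Proxy "balancer://tomcats">
    BalancerMember "http://localhost:8081"
    BalancerMember "http://localhost:8082"
</Proxy>
ProxyPass        "/app" "balancer://tomcats/app"
ProxyPassReverse "/app" "balancer://tomcats/app"
```

If one BalancerMember stops responding, mod_proxy_balancer puts it in an error state and routes traffic to the remaining member.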
Set Up a Log Management Solution with the ELK Stack
Logstash is an open-source tool for collecting, parsing, and storing logs for future use. Kibana 3 is a web interface that can be used to search and view the logs that Logstash has indexed. Both tools rely on Elasticsearch: Logstash stores the parsed logs there, and Kibana queries them from it. Elasticsearch, Logstash, and Kibana used together are known as an ELK stack.
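The collect-parse-store flow can be sketched as a minimal Logstash pipeline (the input path and the Elasticsearch address are assumptions for a single-machine setup, and option names vary a little between Logstash versions):

```conf
input {
  file { path => "/var/log/syslog" }                  # collect
}
filter {
  grok { match => { "message" => "%{SYSLOGLINE}" } }  # parse
}
output {
  elasticsearch { hosts => ["localhost:9200"] }       # store, for Kibana to search
}
```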
Aside from isolating feature development, branches make it possible to discuss changes via pull requests. Once someone completes a feature, they don't immediately merge it into the upstream branch (i.e. develop/master). Instead, they push the feature branch to the central server and file a pull request asking to merge their additions into upstream (i.e. develop/master). This gives other developers an opportunity to review the changes before they become part of the main codebase.
Code review is a major benefit of pull requests, but they're actually designed to be a generic way to talk about code. You can think of pull requests as a discussion dedicated to a particular branch. This means that they can also be used much earlier in the development process. For example, if a developer needs help with a particular feature, all they have to do is file a pull request. Interested parties will be notified automatically.
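On the command line, the flow above looks roughly like this (branch names are examples; the pull request itself is filed in the hosting service's web UI):

```shell
git checkout develop
git checkout -b feature/login      # isolate the feature on its own branch
# ...edit files, git add, git commit as usual...
git push -u origin feature/login   # publish the branch to the central server
# then file a pull request asking to merge feature/login into develop
```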
- http://zeroturnaround.com/rebellabs/object-oriented-design-principles-and-the-5-ways-of-creating-solid-applications/
- http://blog.gauffin.org/2012/05/solid-principles-with-real-world-examples/
Code rot: when an application becomes a festering mass of code that the developers find increasingly hard to maintain.
Gross. So how can we identify future code rot? These signs probably indicate code rot to come:
- Rigidity – small changes cause the entire system to rebuild.
Fact: Docker is an abstraction on top of LXC containers. This means you can run an isolated "virtual machine" (container) within your Linux-based distro. Containers are extremely lightweight and a new container starts very quickly. You can use containers to run your websites, databases, etc. in an isolated environment. In theory you should even be able to use containers for low-latency operations such as an individual database transaction.
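As a quick illustration of how lightweight this is (the ubuntu image is just an example), a container starts in about a second:

```shell
# Start an isolated container, run one command in it, and discard it
sudo docker run --rm ubuntu echo "hello from an isolated container"
# List currently running containers
sudo docker ps
```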
-
Worth Reading! 12 Factor
In order for authentication to work properly, both the SP (FMU in this case) and the IdP (Säkerhetstjänster) must publish a metadata document to each other. These documents are needed to redirect the User Agent (a user trying to log in) through the authentication flow. They also contain the certificate data used to sign and validate statements issued between the entities.
The metadata for the IdP is published on a URL that depends on which environment is needed (test, acctest, prod, etc.). Let's assume we're going for acctest.
- Download the meta-data from: https://idp2.acctest.sakerhetstjanst.inera.se/idp/saml
- Save this file in the project (e.g. metadata/siths.xml)
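The two steps above can be scripted (assuming curl is available and metadata/ is where the project keeps such files):

```shell
# Fetch the acctest IdP metadata and store it in the project
mkdir -p metadata
curl -fsS -o metadata/siths.xml \
  https://idp2.acctest.sakerhetstjanst.inera.se/idp/saml
```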
Key patterns are organized into categories such as Caching, Exception Management, Logging and Instrumentation, Page Layout, Presentation, Request Processing, and Service Interface Layer; as listed below:
Exception Management - Exception Shielding: Filter exception data that should not be exposed to external systems or users.
Logging and Instrumentation - Provider: Implement a component that exposes an API that is different from the client API, to allow any custom implementation to be seamlessly plugged in.
Caching - Cache Dependency: Use external information to determine the state of data stored in a cache.
Caching - Page Cache: Improve the response time for dynamic Web pages that are accessed frequently but change less often and consume a large amount of system resources to construct.
Docker, the new trending containerization technique, is winning hearts with its lightweight, portable, “build once, configure once and run anywhere” functionalities.
An instance of an image is called a container. An image is a set of layers; when you start the image you get a running container of it, and you can have many running containers of the same image.
So a running image is a container.
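In commands, the image/container relationship looks like this (nginx here is just an example image):

```shell
sudo docker pull nginx                # fetch the image (a stack of layers)
sudo docker run -d --name web1 nginx  # first container from the image
sudo docker run -d --name web2 nginx  # second container from the same image
sudo docker ps                        # shows both running containers
```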
- Lists only running containers:
sudo docker ps
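A common companion, to also see stopped containers:

```shell
sudo docker ps -a   # lists all containers, including exited ones
```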