Multitenancy support with IBM SDK 7 R1


The multitenant JVM recently became available as a tech preview in IBM SDK, Java™ Technology Edition, Version 7 Release 1. It provides a mechanism for sharing run-time resources across Java applications that would otherwise each require their own virtual machine instance.

Cloud systems speed up application processing and reduce memory usage by running multiple applications together within a single multitenant JVM. Cloud providers commonly distinguish two multitenant architectures: shared and non-shared. In the shared architecture, tenants run on the same underlying hardware and software. In the non-shared architecture, a complete set of hardware and software is dedicated to a single customer. The overall cost of the shared architecture is, unsurprisingly, far lower. The multitenant JVM brings similar benefits to the Java runtime:

  • Achieves better isolation between the tenants and applications sharing the JVM, in addition to reducing processing time.
  • Reduces application start times: subsequent applications start faster when the JVM is already running.
  • Reduces overall cost: a single set of hardware and software is shared by multiple tenants.

Multitenant JVM vs. multiple standard JVMs

Instead of using the multitenant JVM, a developer can run multiple standard JVMs, but this approach has several memory-consumption problems:

  • The Java heap consumes hundreds of megabytes of memory. Heap objects cannot be shared between JVMs, even when the objects are identical. Furthermore, JVMs tend to use all of the heap that’s allocated to them even if they need the peak amount for only a short time.
  • The Just-in-time (JIT) compiler consumes tens of megabytes of memory, because generated code is private and consumes memory. Generated code also takes significant processor cycles to produce, which steals time from applications.
  • Internal artifacts for classes (many of which, such as String and Hashtable, exist for all applications) consume memory. One instance of each of these artifacts exists for each JVM.
  • Each JVM has a garbage-collector helper thread per core by default and also has multiple compilation threads. Compilation or garbage-collection activity can occur simultaneously in one or more of the JVMs, which can be suboptimal as the JVMs will compete for limited processor time.

Because the multitenant JVM avoids these costs, the maximum number of concurrent applications that can be run in a fixed amount of memory improves by up to nearly 5X:

Application    Description                          Improvement with multitenant JVM
Hello World    Print “Hello World” and then sleep   4.2X to 4.9X
Jetty          Start Jetty and wait for requests    1.9X
Tomcat         Start Tomcat and wait for requests   2.1X
JRuby          Start JRuby and wait for requests    1.2X to 2.1X

Using the multitenant JVM

No source changes are needed to run an application as a tenant. Consider this simple program, which writes a file:

import java.io.*;

public class HelloFile {
  public static void main(String[] args) throws IOException {
    try(PrintStream out = new PrintStream("hello.txt")) {
      out.println("Hello, Tenant!");
    }
  }
}

Compile the program as usual, then invoke it with the -Xmt option to run it as a tenant:

$ javac HelloFile.java
$ java -Xmt HelloFile

Resource constraints
The multitenant JVM provides controls that can be configured to limit a tenant’s ability to misbehave and use resources in a way that affects other tenants. Values that can be controlled include:

  • Processor time
  • Heap size
  • Thread count
  • File I/O: read bandwidth, write bandwidth
  • Socket I/O: read bandwidth, write bandwidth

These controls are specified on the -Xmt command line. For example:

  • -Xlimit:cpu=10-30 (10 percent minimum CPU, 30 percent maximum)
  • -Xlimit:cpu=30 (30 percent maximum CPU)
  • -Xlimit:netIO=20M (maximum bandwidth of 20 Mbps)
  • -Xms8m -Xmx64m (initial 8 MB heap, 64 MB maximum)
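
Putting these together, a tenant might be launched with several limits at once; a sketch, assuming these options can be combined on one command line:

$ java -Xmt -Xlimit:cpu=10-30 -Xlimit:netIO=20M -Xms8m -Xmx64m HelloFile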

 

Documented limitations

Multitenancy cannot be applied arbitrarily to Java applications; there are documented limitations, including:

  • Native libraries (including GUI toolkits such as SWT)
  • Debuggers and profilers

Web Services Authentication Security Patterns


One of the most exciting developments in software engineering is the emergence of design patterns as an approach to capturing, reusing, and teaching software design expertise. In the development of secure applications, patterns are useful in the design of security functionality, and mature security products or frameworks are usually employed to implement it. Yet without a deeper comprehension of these products, implementing security patterns is difficult: an unguided implementation leads to non-deterministic results. Security engineering aims for consistently secure software development by introducing methods, tools, and activities into the software development process.

From an architectural standpoint, there are two main patterns for authentication. Both focus on the relationships that exist between a client and a service participating in a Web service interaction.

1. Direct authentication
The Web service acts as an authentication service to validate credentials from the client. The credentials, which include proof of possession based on shared secrets, are verified against an identity store (a minimal sketch of this check follows the list below).
2. Brokered authentication
The Web service validates the credentials presented by the client, without the need for a direct relationship between the two parties. An authentication broker that both parties trust independently issues a security token to the client. The client can then present credentials, including the security token, to the Web service.
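
As a minimal sketch of direct authentication, the service itself verifies the client's proof of possession against the identity store. The IdentityStore interface and the salted-hash scheme below are illustrative assumptions, not a specific product's API:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Illustrative identity store; in practice this would be LDAP, a database, etc.
interface IdentityStore {
    byte[] saltFor(String username);
    byte[] storedHashFor(String username);
}

public class DirectAuthenticator {
    private final IdentityStore store;

    public DirectAuthenticator(IdentityStore store) {
        this.store = store;
    }

    /** The Web service validates the shared-secret credential itself. */
    public boolean authenticate(String username, char[] password) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(store.saltFor(username));                          // salt first
        md.update(new String(password).getBytes(StandardCharsets.UTF_8));
        // MessageDigest.isEqual performs a time-constant comparison.
        return MessageDigest.isEqual(md.digest(), store.storedHashFor(username));
    }
}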

By design, brokered authentication can be subcategorized into the following patterns.

Brokered Authentication: Kerberos
Use the Kerberos protocol to broker authentication between clients and Web services.

The Web server performs a handshake with the browser to obtain a Kerberos token. The token can then be validated against a keytab file or by connecting to Active Directory.
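
The server side of that handshake can be sketched with the standard Servlet API: if no token is present, the server challenges the browser with Negotiate, and on the retry it extracts the Kerberos token for validation. The NegotiateServlet name and the validateWithKeytabOrAD placeholder are illustrative assumptions:

import java.io.IOException;
import java.util.Base64;
import javax.servlet.http.*;

public class NegotiateServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String auth = req.getHeader("Authorization");
        if (auth == null || !auth.startsWith("Negotiate ")) {
            // Step 1: challenge the browser to start the SPNEGO handshake.
            resp.setHeader("WWW-Authenticate", "Negotiate");
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        // Step 2: the browser retries with a base64-encoded Kerberos token.
        byte[] token = Base64.getDecoder().decode(auth.substring("Negotiate ".length()));
        // Step 3: validate against a keytab file or Active Directory (placeholder;
        // typically implemented with the GSS-API, org.ietf.jgss, and a keytab).
        if (!validateWithKeytabOrAD(token)) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
        }
    }

    private boolean validateWithKeytabOrAD(byte[] kerberosToken) {
        return false; // placeholder for GSS-API or Active Directory validation
    }
}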

Brokered Authentication: X.509 PKI
Use brokered authentication with X.509 certificates issued by a certificate authority (CA) in a public key infrastructure (PKI) to verify the credentials presented by the requesting application.

The X.509 standard, PKIX, and the Public Key Cryptography Standards (PKCS) are the building blocks of a PKI system; they define the standard formats for certificates and their use. A typical X.509 digital certificate contains the following fields:

  • Version
  • Serial number
  • Signature algorithm identifier
  • Issuer (the CA's name)
  • Validity period (not before / not after)
  • Subject (the owner's name)
  • Subject public key information
  • Extensions (in version 3 certificates)
  • The CA's digital signature
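
For example, the verification step can be sketched with the standard java.security.cert API, which checks that a presented certificate is within its validity period and was signed by the trusted CA (the file names are assumptions):

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class VerifyCert {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        // Load the requester's certificate and the trusted CA certificate.
        X509Certificate requester;
        X509Certificate ca;
        try (FileInputStream in = new FileInputStream("requester.cer")) {
            requester = (X509Certificate) cf.generateCertificate(in);
        }
        try (FileInputStream in = new FileInputStream("ca.cer")) {
            ca = (X509Certificate) cf.generateCertificate(in);
        }

        requester.checkValidity();           // throws if expired or not yet valid
        requester.verify(ca.getPublicKey()); // throws if not signed by this CA
        System.out.println("Verified: " + requester.getSubjectX500Principal());
    }
}

A production system would also build and validate the full certificate chain and check revocation (for example, with CertPathValidator), which this sketch omits.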

Brokered Authentication: STS
Use brokered authentication with a security token issued by a Security Token Service (STS). The STS is trusted by both the client and the Web service to provide interoperable security tokens.
The Security Token Service, based on the WS-Trust specification, addresses the token-translation challenge, in which a Web service or client must translate one token format into another. The STS should be part of the Web services security architecture, where it acts as a broker translating one token format into another. Whether or not you have a WS-Security product (which may or may not include an STS), your security architecture should treat the STS as a key architectural building block.

Client Applications + STS + WS-Security Gateway == SOAP Message with appropriate authentication token.

With the above architecture, you delegate token translation and certain cryptographic key-management duties to a central service. The STS can be extended to support any number of input and output token formats without affecting the client applications, removing redundant code across those applications.
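
As an architectural sketch, the STS building block reduces to three operations. The type names below are illustrative only, not a WS-Trust library API:

// Hypothetical sketch of the STS building block; not a WS-Trust library API.
enum TokenFormat { SAML, KERBEROS, X509, USERNAME }

interface SecurityToken {
    TokenFormat format();
    byte[] bytes();
}

public interface SecurityTokenService {

    /** Issue a token in the requested format for an authenticated requester. */
    SecurityToken issue(String requesterId, TokenFormat desiredFormat);

    /** Translate a token from one format to another (token exchange). */
    SecurityToken translate(SecurityToken input, TokenFormat desiredFormat);

    /** Check that a token was issued by this trusted STS and is still valid. */
    boolean validate(SecurityToken token);
}

Because client applications depend only on this interface, new token formats can be added at the STS without touching the clients, which is exactly the removal of redundant code described above.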

 

Fog Before The Cloud

Cisco is working to carve out a new computing category, introduced as fog computing, by combining two existing categories: the Internet of Things and cloud computing. Fog computing, also known as fogging, is a model in which data, processing, and applications are concentrated in devices at the network edge rather than existing almost entirely in the cloud.

(When people talk about “edge computing,” what they literally mean is the edge of the network: the periphery where the Internet ends and the real world begins. Data centers are at the “center” of the network; personal computers, phones, surveillance cameras, and IoT devices are on the edge.)

The problem of how to get things done when we’re dependent on the cloud becomes all the more acute as more and more objects become “smart”: able to sense their environment, connect to the Internet, and even receive commands remotely. Everything from jet engines to refrigerators is being pushed onto wireless networks and joining the “Internet of Things.” Modern 3G and 4G cellular networks simply aren’t fast enough to transmit data from devices to the cloud at the pace it is generated, and as every mundane object at home and at work gets in on this game, it’s only going to get worse unless bandwidth increases.

If devices that handle routing at the network edge can be self-learning, self-organizing, and self-healing, the network becomes decentralized. Cisco wants to turn its routers into hubs for gathering data and making decisions about what to do with it. In Cisco’s vision, its smart routers will never talk to the cloud unless they have to, for example to alert operators to an emergency on a sensor-laden rail car on which one of these routers acts as the nerve center.

Fog Computing can enable a new breed of aggregated applications and services, such as smart energy distribution. This is where energy load-balancing applications run on network edge devices that automatically switch to alternative energies like solar and wind, based on energy demand, availability, and the lowest price.


Fog computing applications and services include:

  • Interplay between the Fog and the Cloud. Typically, the Fog platform supports real-time, actionable analytics; processes and filters the data; and pushes to the Cloud data that is global in geographical scope and time.
  • Data collection and analytics (pulled from access devices, pushed to the Cloud)
  • Data storage for redistribution (pushed from the Cloud, pulled by downstream devices)
  • Technologies that facilitate data fusion in the above contexts
  • Analytics relevant to local communities across various verticals (e.g., advertisements, video analytics, health care, performance monitoring, sensing)
  • Methodologies, models, and algorithms to optimize cost and performance through workload mobility between the Fog and the Cloud

Another example is smart traffic lights: a video camera senses an ambulance’s flashing lights and automatically changes the signals so the vehicle can pass through traffic. Likewise, through fog computing, sensors on self-maintaining trains can monitor train components; if they detect trouble, they send an automatic alert to the train operator to stop at the next station for emergency maintenance.
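
A minimal sketch of that pattern for the train example, with assumed actLocally and sendToCloud hooks: the fog node acts on each reading in real time and pushes only aggregated, globally relevant data upstream.

import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a fog/edge node: decide locally in real time, push only
// summaries to the cloud. All names here are illustrative assumptions.
public class EdgeNode {
    private static final double ALERT_THRESHOLD = 90.0; // e.g. bearing temperature
    private static final int WINDOW_SIZE = 1000;
    private final Deque<Double> window = new ArrayDeque<>();

    public void onSensorReading(double value) {
        if (value > ALERT_THRESHOLD) {
            // Real-time, local decision: no cloud round trip required.
            actLocally("stop at next station for emergency maintenance");
        }
        window.addLast(value);
        if (window.size() >= WINDOW_SIZE) {
            // Only a summary of the window goes upstream to the cloud.
            double avg = window.stream()
                               .mapToDouble(Double::doubleValue)
                               .average().orElse(0);
            sendToCloud(avg);
            window.clear();
        }
    }

    private void actLocally(String action)   { /* e.g. alert the train operator */ }
    private void sendToCloud(double summary) { /* e.g. HTTPS POST when connected */ }
}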

Can We Trust Endpoint Security?

Endpoint security is an approach to network protection that requires each computing device on a corporate network to comply with certain standards before network access is granted. Endpoints can include PCs, laptops, smart phones, tablets and specialized equipment such as bar code readers or point of sale (POS) terminals.

Endpoint security systems work on a client/server model in which a centrally managed server or gateway hosts the security program and an accompanying client program is installed on each network device. When a client attempts to log onto the network, the server program validates user credentials and scans the device to make sure that it complies with defined corporate security policies before allowing access to the network.
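
In outline, the admission decision described above looks something like the following sketch; DevicePosture and the individual policy checks are hypothetical names, not a particular product's API:

// Sketch of the server-side admission check. DevicePosture and the
// policy checks are hypothetical, not a specific product's API.
public class AccessGateway {

    static class DevicePosture {
        boolean antivirusUpToDate;
        boolean osPatchLevelCurrent;
        boolean diskEncrypted;
    }

    /** Grant network access only if credentials and device posture both pass. */
    public boolean admit(String user, char[] password, DevicePosture posture) {
        if (!validateCredentials(user, password)) {
            return false;                    // authentication failed
        }
        // Corporate security policy: every posture check must pass.
        return posture.antivirusUpToDate
                && posture.osPatchLevelCurrent
                && posture.diskEncrypted;
    }

    private boolean validateCredentials(String user, char[] password) {
        return false;                        // placeholder: LDAP/AD bind, RADIUS, etc.
    }
}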

When it comes to endpoint protection, many information security professionals believe that their existing security solutions cannot prevent all endpoint infections, and that anti-virus solutions are ineffective against advanced targeted attacks. Overall, end users are their biggest security concern.

“The reality today is that existing endpoint protection, such as anti-virus, is ineffective because it is based on an old-fashioned model of detecting and fixing attacks after they occur,” said Rahul Kashyap, chief security architect at Bromium, in a statement. “Sophisticated malware can easily evade detection to compromise endpoints, enabling cybercriminals to launch additional attacks that penetrate deeper into sensitive systems. Security professionals should explore a new paradigm of isolation-based protection to prevent these attacks.”

Saltzer and Schroeder’s design principles (http://nob.cs.ucdavis.edu/classes/ecs153-2000-04/design.html) provide us with an opportunity to reflect on the protection mechanisms that we employ (as well as on some principles that we may have forgotten about). Using these to examine AV’s effectiveness as a protection mechanism leads us to conclude that AV, as a protection mechanism, is a non-starter.

That does not mean that AV is completely useless. On the contrary, its utility as a warning or detection mechanism, signalling that primary protection mechanisms have failed, is valuable, assuming of course that a mature security incident response plan and process are in place (i.e., with proper post-incident review (PIR), root-cause analysis (RCA), and continual improvement process (CIP) mechanisms).

Unfortunately, many organisations employ AV as a primary endpoint defence against malware. And that is not all: they expect the technology not only to protect but also to perform remediation. They “outsource” the PIR, RCA and CIP to the AV vendor. The folly of this approach is painfully visible as they float rudderless from one malware outbreak to the next.

There are many alternatives for endpoint security: AppLocker, LUA, SEHOP, ASLR and DEP are all freely provided by Microsoft, as is removing users’ administrative rights (why did we ever give those to them in the first place?).

Other whitelisting technologies worthy of consideration are NAC (with remediation) and other endpoint compliance checking tools, as well as endpoint firewalls in default deny mode.