Spring Boot with Apache CXF

In a recent project I worked on, I encountered an interesting mix of Spring, CXF, and Camel. Thanks to the seamless integration between CXF and Camel, this tech stack helped us build a simple ESB server. Here I’m trying to implement the same module.

To start, I’m setting up Spring Boot with CXF, following the Apache CXF documentation [1], and implementing a basic REST service. You can find the implementation on GitHub [2].

In abstract terms, the basic pattern is this: CXF acts solely as a proxy, and its components help to add different properties and tweak the request flow.
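To make that pattern concrete, here is a minimal wiring sketch of a JAX-RS resource exposed through CXF in a Spring Boot app. The class names (`CxfRestApplication`, `HelloService`) and the `/hello` path are my own illustrative choices, not taken from the linked repo, and it assumes the `cxf-spring-boot-starter-jaxrs` dependency (CXF 3.x, `javax.ws.rs`) is on the classpath.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.apache.cxf.Bus;
import org.apache.cxf.endpoint.Server;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class CxfRestApplication {

    public static void main(String[] args) {
        SpringApplication.run(CxfRestApplication.class, args);
    }

    // A trivial JAX-RS resource; CXF proxies requests through to it.
    @Path("/hello")
    public static class HelloService {
        @GET
        @Produces(MediaType.TEXT_PLAIN)
        public String hello() {
            return "Hello from CXF";
        }
    }

    // Register the resource with CXF's JAX-RS server. The starter mounts
    // CXF under /services by default, so this ends up at /services/hello.
    @Bean
    public Server rsServer(Bus bus) {
        JAXRSServerFactoryBean endpoint = new JAXRSServerFactoryBean();
        endpoint.setBus(bus);
        endpoint.setAddress("/");
        endpoint.setServiceBean(new HelloService());
        return endpoint.create();
    }
}
```

With the app running, a GET to http://localhost:8080/services/hello should return the plain-text greeting.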



[1] http://cxf.apache.org/docs/springboot.html

[2] https://github.com/lsdeva/cxfrest/tree/basicImpl


Effectiveness of Change Freezes

“Freezes” have two objectives:
• Reduce the potential for impact (unsuccessful changes)
• Allow for reduced staffing (holidays, significant projects, etc.)

I would say they are “moderately” successful, but I would also say they are only “moderately” enforced. We tend to allow a lot of exceptions during freeze periods, although they do require enhanced approval.

During the most recent freeze, unsuccessful changes dropped from around 6% to 0%, so from that aspect: success! I would attribute this to enhanced scrutiny and a reduction in high-risk changes. The overall quantity of changes (and therefore the load on delivery staff) was virtually unchanged, however, because a lot of projects (not directly tied to production services) had something of an “end of year rush” to get changes in.


There are certainly challenges around the data. It is particularly difficult to show direct causation in a complex system like our modern technical environments.

One step closer to user – Production Support (Part 2)



OK, let’s continue. If you missed the first part, check here.

So far we have talked about what PS is and what kind of qualities you need to be successful in this field. The field is standardized through ITIL, and the certification will be handy if you want to build your career in PS. On my very first day, when I had tea with my manager, he advised me to complete the ITIL certification for better career growth.

There are four levels of support roles defined in ITIL, as follows:

L1 (Level 1) Support:

Role: Software engineer, Support analyst

Responsibilities: Support ticket classification, re-prioritization, user clarification, password resets, and responses to user queries based on the known-error database; escalation of tickets to the next level of support.

L2 (Level 2) Support:

Role: Sr. Software engineer, Technical Analyst, Functional consultant

Responsibilities: Issue analysis, identification of bugs and workarounds, responses to functional queries, and understanding and analysis of user requirements for minor enhancements.

L3 (Level 3) Support:

Role: Technical Lead, Sr. Functional consultant

Responsibilities: Permanent bug fixes through root cause analysis and code changes. Regular analysis of recurring tickets can lead to a decision to implement a permanent fix as a code change. Minor enhancements to improve the user experience and reduce clarification calls.

Functional consultant: Resolving users’ functional queries, gathering requirements, and coordinating with the tech team.

L4 (Level 4) Support:

Product support through the vendor/OEM. L4 deals with issues that may be caused by a fault in a product associated with the application, e.g. the operating system or the application framework.

Even though this is a decade-old industry, you will still see failures. The findings of the Fujitsu research illustrate why application support is so necessary:

• More than half of CIOs would not be able to show their chief financial officer that they know all the applications sitting on their IT infrastructure.

• Two-thirds of CIOs felt that their applications portfolio was only partially aligned to their business strategy or not at all.

• Only 39% of CIOs said they had the right strategy to manage their organization’s portfolio.

• Nearly two-thirds of CIOs could not provide the true cost of running applications in their business.

• 47% of CIOs said they did not have the resources to ensure maximum value from their applications.

• Half of CIOs said the level of duplication was moderate to “too much”.

(I’ll share my personal experience on this sub-topic in a future post.)


My work experience so far

The role gave me experience in the following:

• Supporting various _______  applications and platforms used in the  ________ department

• Solving problems around breaks, data feeds, and risk figures.

• Liaising between front office, product control, risk and strategy to deliver two essential tools for _______ data analysis.

• Migration projects, UAT, parallel testing, and regression testing

• Resolving urgent and immediate requests by various users in a vibrant and demanding environment

• Advanced use of Excel formulae.

• Team working, time management, communication and problem-solving skills were significantly enhanced.

• Exposure to the organizational structure of investment banks.

• Further understanding of financial processes and terms that affect our economy.


If you have gone through both of my posts, let me answer some FAQs to save you from further googling.

What is the future of an application/production support engineer in the IT industry? During the last recession, in 2008, most companies put their planned development projects on hold, but they didn’t cut much of their budget for support and maintenance work.

Is production support a good choice to start a career? As a fresher, I would strongly suggest you go for a development project, because only in dev can you learn the full picture. This does not mean that you can’t learn in production support, you can, but the learning will be different: UNIX-related skills, application troubleshooting, and so on, which is not what you should be looking for at this point in your career (as a fresher).
Also, remember that moving from dev to production support is very easy, but the reverse is CHALLENGING.



One step closer to user – Production Support (Part 1)

If you are familiar with the usual software development project flow, you will know that a BA gets a bunch of requirements from the client to be implemented. At the end of the implementation phase(s), there comes a day when the software launches to the production environment, after many sleepless nights for the dev team. What comes after that?

If the software is a long-running, business-critical application, there will be support services that need to provide at least 99% uptime. In the business world, this support phase is known as production support. Through many years of molding, this phase has been standardized, and I had a great opportunity to experience the full production support phase.

Let me share some of what I have learned so far. First of all, two pieces of jargon:

Production support – you are responsible for all types of issues: connectivity, infrastructure maintenance, the functionality of components (not a single application), etc.
Application support – you are responsible for issues specific to a particular application instead of the whole environment. You only have to work when there is something wrong with your application.

Today, all of the complex business processes are supported by computer software and hardware. However, just as people are susceptible to making mistakes, software and hardware make errors, too. Therefore, every company must have an application support team to ensure that these business applications run successfully and are error-free.

Supporting applications is critical for three-quarters of organisations, but over half (53%) are struggling to maintain and manage their portfolios. The latest research from Fujitsu confirms the need for better asset management and qualified application support analysts.

From the outside, it might seem as if the application support group fixes errors when users complain, and not much more. I have heard a manager state that the support people basically put their fingers in a hole in the dam when a leak springs up. This perception is not correct. Actually, the support staff provides a number of services and has a number of responsibilities to ensure that applications remain in good working order.

Application support is a dynamic career track with many opportunities.

What do application support analysts do?
They fix application and system problems, or any incident that is disrupting the application service that business users depend on. The job calls for both technical capability and business understanding. Crucially, application issues are production, or live, issues that need immediate attention: an unflappable temperament is a must.

What does good communication consist of?
It goes without saying that application support analysts need excellent communication skills – but what exactly does that mean? First, of course, is the ability to express yourself well, verbally and on paper or email. You also need an acute understanding that other people within the business depend on your services, and know how to respond to that dependency. This may be via acknowledgement, updates and resolution.

Core tech competencies
An application support analyst needs to demonstrate competent IT literacy around applications and systems. Core technical areas are databases and SQL, and operating system platforms such as UNIX, especially Solaris, and Windows. Delivering live IT environments that enable the business every day is a challenging and dynamic career with many opportunities.

Six further competencies

These additional capabilities will ensure success in building a support analyst career:

• Technical knowledge

• Business awareness

• Cultural awareness

• Service awareness, preferably IT Infrastructure library (ITIL) certification

• Investigation and diagnostic skills (the Sherlock Holmes factor)

• Support tool knowledge

Six personal attributes

Application support staff, particularly those within blue chip companies, cite the following attributes as contributing to success:

• Communication skills and active listening

• Empathy with users

• Acceptance of ownership

• Patience and understanding

• Investigation & diagnostic skills (more of the Sherlock factor)

• Language skills (in some cases)

Let’s talk further in the second post.

Big Data Analytics – a late guide

Big data is old news if you work in cutting-edge tech, but it seems some people have realised it is a handy skill set to have on a resume after all, and have started to engage. For latecomers, these tips may be helpful. There is a list of the 51 best tips for big data analytics on the internet; drawing on my experience, I would like to help by abstracting those tips.

The full article, 51 expert tips for learning big data analytics, was written by Molly Galetto and is organized into four sections.

Big data is everywhere, and small businesses and enterprises alike are making strides in transforming business outcomes through effective big data analytics. For today’s marketing and IT professionals, big data analytics is rapidly becoming an essential yet multi-faceted skill, and those who master big data analytics play a critical role in transforming their companies into data-driven organisations.


Why Master Big Data Analytics?


1. Big data creates career advancement opportunities for IT and other professionals. “Big data is definitely creating tremendous opportunities for the IT pros that know and understand it. That could be in a new role such as a data engineer or simply in a revision of an existing job description — one that makes you more versatile and less dispensable to your employer and will likely generate unexpected opportunities down the road.

“Where do you add these magical skills, especially if your employer isn’t offering training in them? The Internet, of course. Education and skills training has experienced its own share of change lately, and there’s plenty of upside for the knowledge-thirsty IT pro: Loads of readily available, online classes for developing new skills across the technical spectrum. Best of all, many of these learning opportunities come at no cost to students — so the only thing you’re really putting on the line is your time and energy. Admittedly, those are not finite resources — but you can tackle new learning and career advancement chances with minimal risks.” – Kevin Casey, 10 Big Data Online Courses.


You can learn more about the 12 other tips in this section here.


Get an Education in Big Data Analytics


14. Consider a two-year Master’s degree program focused on Big Data analytics. “It’s well documented that there’s a big data talent gap, but what’s being done about it? What’s needed is knowledge and experience. On the first front, hundreds of colleges and universities worldwide are gearing up business analytics, machine learning and other programs aimed at analysis of data in a business context.” – Doug Henschen, Big Data Analytics Master’s Degrees: 20 Top Programs.


You can find more information about the 7 other tips in this section here.


Essential Languages and Skills to Master


21. There are several essential tools of the trade anyone interested in a career in big data analytics should master. “SAS, SPSS, R, and SQL. Start with any tool that you can get access to. Sometimes you will be surprised to find that a Tool that you thought did not exist in your organization actually does. In one of my previous jobs, when I was busy negotiating with SAS for licenses for my team, a colleague of mine, who was an Actuary told me that he had seen a SAS session in one his team member’s PC, sometime back. I followed up with that team member and we found that we had a SAS server already in place waiting to be used!

“Learning is not about knowing everything, but learning substantial portions thoroughly and gaining sound knowledge about what you learn. I would much prefer a candidate who knows a lot about how to run a regression in SPSS, than a person who has half baked knowledge (knows a little bit about CHAID, done a little bit of regression, knows a little bit of SAS and a little bit of SPSS) If you can master one tool and a few modules/techniques of the tool, then you stand a better chance of getting a job and also of being able to get a job done.

“Pick up a tool that is available easily to you and start learning it – SAS, SPSS, R (now available as open source).

“I do not recommend using pirated software though they are now openly available in the market.” – Snehamoy Mukherjee, 5 Tips to build a Career in Analytics and Big Data!


For more explanation of the 12 other tips in this section, click here.


Tips for Mastering Big Data Analytics


33. If you’re a business or marketing professional without an in-depth knowledge of the technical jargon typically used in big data analytics tutorials and courses, you can still master big data analytics if you know where to look for the right learning materials. “Intrigued by analytics? Wish you knew more about it? A lot of people search for information, and land on sites that are, well, too geeky. They’re aimed at programmers, people who pride themselves on knowing all the intricacies of their favorite software, or (eek!) math majors. These are not good sources for business people aiming to get a grip on the topic.

“Maybe you’ve come across ESPN’s FiveThirtyEight.  This is the right kind of reading for you. These articles, written in normal human English (ok, much better than normal), can be read and understood by any educated adult. Great. Still, there’s a much wider range of analytics topics, and viewpoints, on the web that business readers can understand and put to good use. It’s a matter of knowing where to look.” – Meta S. Brown, 6 (OK, 7) Big Data and Analytics Learning Resources That Business P…, Forbes.


You can learn more about the 19 other tips in this section here.


TPCx-BB New Data Analytics and Machine Learning Benchmark


A new data analytics and machine learning benchmark has been released by the Transaction Processing Performance Council (TPC) measuring real-world performance of Hadoop-based systems, including MapReduce, Apache Hive, and Apache Spark Machine Learning Library (MLlib).

Called the TPCx-BB benchmark and downloadable at the TPC site, it executes queries frequently performed by companies in the retail industry running customer behavior analytics.

The TPCx-BB (BB stands for “Big Benchmark”) is designed to incorporate complex customer analytical requirements of retailers. Whereas online retailers have historically recorded only completed customer transactions, today deeper insight is needed into consumer behavior, with relatively straightforward shopping basket analysis replaced by detailed behavior modeling. According to the TPC, the benchmark compares various analytics solutions in a real-world scenario, providing performance-vs.-cost tradeoffs.

The benchmark tests various data management primitives – such as selects, joins and filters – and functions. Where necessary, it utilizes procedural programs written in Java, Scala and Python. For use cases requiring machine learning data analysis techniques, the benchmark utilizes Spark MLlib, invoking machine learning algorithms on the input dataset produced during the data management phase.

The benchmark exercises the compute, I/O, memory and efficiency of various Hadoop software stacks (Hive, MapReduce, Spark, Tez) and runs tasks resembling applications developed by an end-user with a cluster deployed in a datacenter, providing realistic usage of cluster resources.


Other phases of the benchmark include:

Load: tests how fast raw data can be read from the distributed file system, permuted by applying various optimizations, such as compression, data formats (ORC, text, Parquet).

Power: tests the system using short-running jobs with less demand on cluster resources, and long-running jobs with high demand on resources.

Throughput: tests the efficiency of cluster resources by simulating a mix of short and long-running jobs, executed in parallel.

For the record, according to HPE, the 12-node ProLiant cluster used in the first test run of the benchmark had three master/management nodes and nine worker nodes, running RHEL 6.x and the CDH 5.x Hadoop distribution. It ran a dataset of about 3 TB. Comparing current- versus previous-generation ProLiant servers, HPE reported a 27 percent performance gain and a 9 percent cost reduction.
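As a back-of-the-envelope illustration of that performance-vs-cost tradeoff (my own arithmetic, not the TPC’s official price/performance metric), a 27% throughput gain combined with a 9% cost reduction works out to roughly a 28% improvement in price per unit of performance:

```java
public class PricePerf {

    // Relative price/performance of a new system versus a baseline, given a
    // fractional performance gain and a fractional cost reduction.
    // Lower is better; 1.0 means unchanged.
    public static double relativePricePerf(double perfGain, double costReduction) {
        return (1.0 - costReduction) / (1.0 + perfGain);
    }

    public static void main(String[] args) {
        // HPE's reported figures: +27% performance, -9% cost.
        double ratio = relativePricePerf(0.27, 0.09);
        // 0.91 / 1.27 ≈ 0.717, i.e. about a 28% better price/performance ratio.
        System.out.printf("relative price/performance: %.3f (about %.0f%% better)%n",
                ratio, (1.0 - ratio) * 100);
    }
}
```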