Optimizing Power Usage in Single Chip Cloud Computer

The Intel Single-Chip Cloud Computer (SCC) is a highly flexible 48-core research platform designed to run parallel programs. Optimizing the power usage of the SCC chip is a challenging goal. This research focused on achieving it with a bare-metal framework (i.e., without a standard OS).

Read more in the paper below:

Optimizing Power Usage in Single Chip Cloud Computer

 


One step closer to user – Production Support (Part 2)

 

 

OK, let's continue. If you missed the first part, check it here.

So far we have talked about what PS is and what qualities you need to be successful in this field. The field is standardized by ITIL, and the certification will come in handy if you want to build your career in PS. On my very first day, when I had tea with my manager, he advised me to complete the ITIL certification for better career growth.

There are around four levels of support roles defined in ITIL, as follows:

L1 (Level 1) Support:

Role: Software engineer, Support analyst

Responsibilities: Support ticket classification, re-prioritization, user clarification, password resets, responses to user queries based on the known-error database, and escalation of tickets to the next level of support.

L2 (Level 2) Support:

Role: Sr. Software engineer, Technical Analyst, Functional consultant

Responsibilities: Issue analysis, identification of bugs and workarounds, responses to functional queries, and understanding and analysis of user requirements for minor enhancements.

L3 (Level 3) Support:

Role: Technical Lead, Sr. Functional consultant

Responsibilities: Permanent bug fixes through root cause analysis and code changes. Regular analysis of recurring tickets can lead to a decision to implement a permanent fix as a code change. Minor enhancements to improve the user experience and reduce clarification calls.

Functional Consultant: Resolving user’s functional queries, requirement gathering, coordination with the tech team.

L4 (Level 4) Support:

Product support through the vendor / OEM. L4 deals with issues that may be caused by a fault in a product associated with the application, e.g. the operating system or the application framework.

Even though this is a decade-old industry, you will still see failures. The findings of the Fujitsu research illustrate why application support is so necessary:

• More than half of CIOs would not be able to show their chief financial officer that they know all the applications sitting on their IT infrastructure.

• Two-thirds of CIOs felt that their applications portfolio was only partially aligned to their business strategy or not at all.

• Only 39% of CIOs said they had the right strategy to manage their organization’s portfolio.

• Nearly two-thirds of CIOs could not provide the true cost of running applications in their business.

• 47% of CIOs said they did not have the resources to ensure maximum value from their applications.

• Half of CIOs said the level of duplication was moderate to “too much”.

(I’ll share my personal experience on this sub-topic in a future post.)

 

My Work experience so far

The role gave me experience in the following:

• Supporting various _______  applications and platforms used in the  ________ department

• Solving problems around breaks, data feeds, risk figures.

• Liaising between front office, product control, risk and strategy to deliver two essential tools for _______ data analysis.

• Migration projects, UAT, parallel testing, and regression testing

• Resolving urgent and immediate requests by various users in a vibrant and demanding environment

• Advanced use of EXCEL formulae.

• Team working, time management, communication and problem-solving skills were significantly enhanced.

• Exposure to the organizational structure of investment banks.

• Further understanding of financial processes and terms that affect our economy.

 

If you have gone through both of my posts, let me answer some FAQs to save you from further googling.

What is the future of an application/production support engineer in the IT industry? During the last recession, in 2008, most companies put their planned development projects on hold, but they didn’t cut much of their budget for support and maintenance work.

Is production support a good choice to start a career? As a fresher, I would strongly suggest you go for a development project, because only in development can you learn the complete picture. This does not mean that you can’t learn in production support; you can, but the learning will be different – UNIX, application troubleshooting and so on – which is not what you should be looking for at this point in your career (as a fresher).
Also, remember that moving from development to production support is very easy, but the reverse is challenging.

 

 

One step closer to user – Production Support (Part 1)

If you are familiar with the usual software development project flow, you will know that a BA gets a set of requirements from the client to be implemented. At the end of the implementation phase(s), there comes a day when the software is launched to the production environment, after many sleepless nights for the dev team. What comes after that?

If the software is a long-running, business-critical application, there will be support services that need to provide at least 99% uptime. In the business world, this support phase is known as production support. Over many years of molding, this phase has become standardized, and I had a great opportunity to experience the full cycle of production support.

Let me share some of what I have learned so far. First of all, two pieces of jargon:

Production support – you are responsible for all types of issues, such as connectivity, infrastructure maintenance and component (not single-application) functionality.
Application support – you are responsible for issues specific to a particular application instead of the whole environment. You only have to act when there is something wrong with your application.

Today, almost all complex business processes are supported by computer software and hardware. However, just as people are susceptible to making mistakes, software and hardware make errors, too. Therefore, every company must have an application support team to ensure that these business applications run successfully and are error-free.

Supporting applications is critical for three-quarters of organisations, but over half (53%) are struggling to maintain and manage their portfolios. The latest research from Fujitsu confirms the need for better asset management and qualified application support analysts.

From the outside, it might seem as if the application support group fixes errors when users complain, and not much more. I have heard a manager state that the support people basically put their fingers in a hole in the dam when a leak springs up. This perception is not correct. In reality, the support staff provide a number of services and have a number of responsibilities to ensure that applications remain in good working order.

Application support is a dynamic career track with many opportunities.

What do application support analysts do?
They fix application and system problems, or any incident that is disrupting the application service that business users depend on. The job calls for both technical capability and business understanding. Crucially, application issues are production, or live, issues and need immediate attention: an unflappable temperament is a must.

What does good communication consist of?
It goes without saying that application support analysts need excellent communication skills – but what exactly does that mean? First, of course, is the ability to express yourself well, verbally and on paper or email. You also need an acute understanding that other people within the business depend on your services, and know how to respond to that dependency. This may be via acknowledgement, updates and resolution.

Core tech competencies
An application support analyst needs to demonstrate competent IT literacy around applications and systems. Core technical areas are databases and SQL, and operating system platforms such as UNIX, especially Solaris, and Windows. Delivering live IT environments that enable the business every day is a challenging and dynamic career with many opportunities.

Six further competencies

These additional capabilities will ensure success in building a support analyst career:

• Technical knowledge

• Business awareness

• Cultural awareness

• Service awareness, preferably IT Infrastructure library (ITIL) certification

• Investigation and diagnostic skills (the Sherlock Holmes factor)

• Support tool knowledge

Six personal attributes

Application support staff, particularly those within blue chip companies, cite the following attributes as contributing to success:

• Communication skills and active listening

• Empathy with users

• Acceptance of ownership

• Patience and understanding

• Investigation & diagnostic skills (more of the Sherlock factor)

• Language skills (in some cases)

Let’s talk further in the second post.

Big Data Analytics – late guide

Big data is old news if you are in cutting-edge tech. But it seems some people have realised it’s quite a handy skill set to have on a resume after all and have started to engage. For latecomers, these tips may be helpful. There is a list of 51 of the best big data tips on the internet; drawing on my experience, I would like to help out by abstracting those tips.

The full article, 51 Expert Tips for Learning Big Data Analytics, was written by Molly Galetto. You can find four sections in it.

Big data is everywhere, and small businesses and enterprises alike are making strides in transforming business outcomes through effective big data analytics. For today’s marketing and IT professionals, big data analytics is rapidly becoming an essential yet multi-faceted skill, and those who master big data analytics play a critical role in transforming their companies into data-driven organisations.

 

Why Master Big Data Analytics?

 

1. Big data creates career advancement opportunities for IT and other professionals. “Big data is definitely creating tremendous opportunities for the IT pros that know and understand it. That could be in a new role such as a data engineer or simply in a revision of an existing job description — one that makes you more versatile and less dispensable to your employer and will likely generate unexpected opportunities down the road.

“Where do you add these magical skills, especially if your employer isn’t offering training in them? The Internet, of course. Education and skills training has experienced its own share of change lately, and there’s plenty of upside for the knowledge-thirsty IT pro: Loads of readily available, online classes for developing new skills across the technical spectrum. Best of all, many of these learning opportunities come at no cost to students — so the only thing you’re really putting on the line is your time and energy. Admittedly, those are not finite resources — but you can tackle new learning and career advancement chances with minimal risks.” – Kevin Casey, 10 Big Data Online Courses.

 

You can learn more about the 12 other tips in this section here.

 

Get an Education in Big Data Analytics

 

14. Consider a two-year Master’s degree program focused on Big Data analytics. “It’s well documented that there’s a big data talent gap, but what’s being done about it? What’s needed is knowledge and experience. On the first front, hundreds of colleges and universities worldwide are gearing up business analytics, machine learning and other programs aimed at analysis of data in a business context.” – Doug Henschen, Big Data Analytics Master’s Degrees: 20 Top Programs.

 

You can find more information about the 7 other tips in this section here.

 

Essential Languages and Skills to Master

 

21. There are several essential tools of the trade anyone interested in a career in big data analytics should master. “SAS, SPSS, R, and SQL. Start with any tool that you can get access to. Sometimes you will be surprised to find that a tool that you thought did not exist in your organization actually does. In one of my previous jobs, when I was busy negotiating with SAS for licenses for my team, a colleague of mine, who was an actuary, told me that he had seen a SAS session on one of his team members’ PCs some time back. I followed up with that team member and we found that we had a SAS server already in place waiting to be used!

“Learning is not about knowing everything, but learning substantial portions thoroughly and gaining sound knowledge about what you learn. I would much prefer a candidate who knows a lot about how to run a regression in SPSS than a person who has half-baked knowledge (knows a little bit about CHAID, has done a little bit of regression, knows a little bit of SAS and a little bit of SPSS). If you can master one tool and a few modules/techniques of the tool, then you stand a better chance of getting a job and also of being able to get a job done.

“Pick up a tool that is available easily to you and start learning it – SAS, SPSS, R (now available as open source).

“I do not recommend using pirated software though they are now openly available in the market.” – Snehamoy Mukherjee, 5 Tips to build a Career in Analytics and Big Data!
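To make the “run a regression” advice concrete, here is a minimal sketch in Python, itself a freely available alternative to SAS, SPSS and R; the tool choice and the spend/sales numbers are my own illustration, not part of the quoted tips.

import numpy as np

# Hypothetical data: advertising spend vs. resulting sales (illustrative only).
spend = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sales = np.array([2.1, 4.3, 6.2, 7.9, 10.1])

# Ordinary least squares fit of sales = slope * spend + intercept.
slope, intercept = np.polyfit(spend, sales, deg=1)
predicted = slope * spend + intercept

# R-squared: proportion of variance in sales explained by the fit.
ss_res = np.sum((sales - predicted) ** 2)
ss_tot = np.sum((sales - sales.mean()) ** 2)
print(f"slope={slope:.3f}, intercept={intercept:.3f}, R^2={1 - ss_res / ss_tot:.3f}")

Once you are comfortable with one tool and one technique at this level, the same reasoning transfers to SAS, SPSS or R with little friction.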

 

For more explanation of the 12 other tips in this section, click here.

 

Tips for Mastering Big Data Analytics

 

33. If you’re a business or marketing professional without an in-depth knowledge of the technical jargon typically used in big data analytics tutorials and courses, you can still master big data analytics if you know where to look for the right learning materials. “Intrigued by analytics? Wish you knew more about it? A lot of people search for information, and land on sites that are, well, too geeky. They’re aimed at programmers, people who pride themselves on knowing all the intricacies of their favorite software, or (eek!) math majors. These are not good source for business people aiming to get a grip on the topic.

“Maybe you’ve come across ESPN’s FiveThirtyEight.  This is the right kind of reading for you. These articles, written in normal human English (ok, much better than normal), can be read and understood by any educated adult. Great. Still, there’s a much wider range of analytics topics, and viewpoints, on the web that business readers can understand and put to good use. It’s a matter of knowing where to look.” – Meta S. Brown, 6 (OK, 7) Big Data and Analytics Learning Resources That Business P…, Forbes.

 

You can learn more about the 19 other tips in this section here.


TPCx-BB New Data Analytics and Machine Learning Benchmark


A new data analytics and machine learning benchmark has been released by the Transaction Processing Performance Council (TPC) measuring real-world performance of Hadoop-based systems, including MapReduce, Apache Hive, and Apache Spark Machine Learning Library (MLlib).

Called the TPCx-BB benchmark and downloadable at the TPC site, it executes queries frequently performed by companies in the retail industry running customer behavior analytics.

The TPCx-BB (BB stands for “Big Benchmark”) is designed to incorporate complex customer analytical requirements of retailers. Whereas online retailers have historically recorded only completed customer transactions, today deeper insight is needed into consumer behavior, with relatively straightforward shopping basket analysis replaced by detailed behavior modeling. According to the TPC, the benchmark compares various analytics solutions in a real-world scenario, providing performance-vs.-cost tradeoffs.

The benchmark tests various data management primitives – such as selects, joins and filters – and functions. Where necessary, it utilizes procedural programs written in Java, Scala and Python. For use cases requiring machine learning techniques, the benchmark uses Spark MLlib, providing the algorithms with an input dataset produced during the data management phase.
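As a rough, hypothetical illustration (this is not the benchmark’s actual query code, and the table names and values below are invented), the kind of Spark work the benchmark exercises might look like this in PySpark: a select/filter/join for the data management phase, followed by a Spark MLlib algorithm on the result.

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("tpcxbb-style-sketch").getOrCreate()

# Invented tables standing in for retail sales and customer data.
sales = spark.createDataFrame(
    [(1, 101, 3, 29.99), (2, 102, 1, 9.50), (3, 101, 7, 70.00)],
    ["sale_id", "customer_id", "quantity", "amount"])
customers = spark.createDataFrame(
    [(101, 34), (102, 51)], ["customer_id", "age"])

# "Data management" phase: select, filter and join.
joined = (sales.filter(sales.amount > 10.0)
               .join(customers, "customer_id")
               .select("customer_id", "age", "quantity", "amount"))

# "Machine learning" phase: cluster the joined records with Spark MLlib.
features = VectorAssembler(inputCols=["age", "quantity", "amount"],
                           outputCol="features").transform(joined)
model = KMeans(k=2, featuresCol="features").fit(features)
model.transform(features).show()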

The benchmark exercises the compute, I/O, memory and efficiency of various Hadoop software stacks (Hive, MapReduce, Spark, Tez) and runs tasks resembling applications developed by an end-user with a cluster deployed in a datacenter, providing realistic usage of cluster resources.


Other phases of the benchmark include:

Load: tests how fast raw data can be read from the distributed file system and prepared by applying various optimizations, such as compression and data formats (ORC, text, Parquet).

Power: tests the system using short-running jobs with less demand on cluster resources, and long-running jobs with high demand on resources.

Throughput: tests the efficiency of cluster resources by simulating a mix of short- and long-running jobs executed in parallel (a toy illustration of this idea appears just below).
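As a toy illustration of the throughput idea, and nothing to do with the real TPCx-BB driver, the Python sketch below submits a mix of short and long “jobs” to a small worker pool and compares the total work done with the wall-clock time taken.

import time
from concurrent.futures import ThreadPoolExecutor

def job(duration: float) -> float:
    time.sleep(duration)      # stand-in for a real query or ML task
    return duration

durations = [0.1, 0.1, 0.5, 0.1, 1.0, 0.2]   # a mix of short and long jobs
start = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(job, durations))
elapsed = time.time() - start
print(f"{sum(durations):.1f}s of work finished in {elapsed:.1f}s of wall-clock time")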

For the record, according to HPE, the 12-node ProLiant cluster used in the first test run on the benchmark had three master/management nodes and nine worker nodes with RHEL 6.x OS and the CDH 5.x Hadoop distribution. It ran a dataset of about 3TB. Comparing current- versus previous-generation ProLiant servers, HPE reported a 27 percent performance gain and a cost reduction of 9 percent.

New App Container Tools from CoreOS and Puppet

The expanding application container and micro-services infrastructure got another boost this week with the introduction of a new set of tools for managing distributed software used to orchestrate micro-services.

CoreOS announced a new open source distributed storage system designed to provide scalable storage to clusters orchestrated by the Kubernetes container management platform.

Puppet, the IT automation specialist based in Portland, Ore., recently released a suite of tools under the codename Project Blueshift that provides modules for running container software from CoreOS, Docker and Mesosphere, along with the Kubernetes cluster manager. This week it released a new set of Docker images for running its software on Docker Hub.

Blueshift software tools can now be deployed and run on top of Docker; running within the application container platform makes it easier to scale Puppet.

Puppet also announced a new agent to manage Linux virtual machines running on IBM z Systems and LinuxONE platforms, along with new modules for IBM WebSphere application and integration middleware and a module supporting Cisco Systems’ Nexus line of switches. The modules are intended to automate IT management while speeding application deployment across hybrid cloud infrastructure.

The IBM WebSphere module is available now, and a new agent with packages supporting Red Hat Enterprise Linux 6, along with SUSE Linux Enterprise Server 11 and 12, will be available later this summer.

Meanwhile, San Francisco-based CoreOS rolled out a new open source distributed storage effort this week designed to address persistent storage in container clusters. The company said its Torus distributed storage platform aims to deliver scalable storage for container clusters orchestrated by the Kubernetes container manager. A prototype version of Torus is available on GitHub.

CoreOS said Torus aims to solve common storage issues associated with running distributed applications. “While it is possible to connect legacy storage to container infrastructure, the mismatch between these two models convinced us that the new problems of providing storage to container clusters warranted a new solution,” the company noted in a statement announcing the open source storage effort.

Operating on the premise that large clusters of applications containers require persistent storage, CoreOS argues that storage for clusters of lightweight virtual machines must be uniformly available across a network as processing shifts among containers.

Torus runs on etcd, the CoreOS distributed key-value store used to hold data across a cluster of machines; etcd is used in “thousands” of production deployments, CoreOS claims. Building on it allows Torus to focus on custom persistent storage configurations. The tool is also designed as a building block for delivering different types of storage, including distributed block devices and large object storage.
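For readers who have not used etcd, here is a minimal sketch of writing and reading a key from Python. It assumes the third-party python-etcd3 client and an etcd endpoint on localhost:2379; the key names and the client choice are my own illustration, not part of the CoreOS announcement.

import etcd3  # third-party client: pip install etcd3

client = etcd3.client(host="localhost", port=2379)

# Store a piece of cluster metadata under a key...
client.put("/torus-demo/volume-size", "10GiB")

# ...and read it back; get() returns a (value_bytes, metadata) pair.
value, metadata = client.get("/torus-demo/volume-size")
print(value.decode())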

Supporting Your Data Management Strategy with a Phased Approach to Master Data Management – 2

Data Quality Management

Data governance encompasses the program management required to manage data consumer expectations and requirements, along with collaborative semantic metadata management. However, the operational nexus is the integration of data quality rules into the business process and application development life cycle. Directly embedding data quality controls into the data production workflows reduces the continual chore of downstream parsing, standardization and cleansing. These controls also alert data stewards to potential issues long before they lead to irreversible business impacts.

Engaging business data consumers and soliciting their requirements allows data practitioners to translate requirements into specific data quality rules. Data controls can be configured with rules and fully incorporated into business applications. Data governance procedures guide data stewards through the workflow tasks for addressing emerging data quality issues. Eliminating the root causes for introducing flawed data not only supports the master data management initiative, it also improves the overall quality of enterprise data. Data quality management incorporates tools and techniques for:

Data quality rules and standards. Providing templates for capturing, managing and deploying data quality rules – and the standards to which the data sets and applications must conform – establishes quantifiable measures for reporting quality levels. Since the rules are derived from data consumer expectations, the measures provide relevant feedback as to data usability.

Data quality controls. Directly integrating data quality controls as part of the application development process means that data quality is “baked in” to the application infrastructure. Enabling rule-based data validation ratchets data quality out of downstream reactive mode and helps data practitioners address issues within the context of the business application (a minimal sketch of such a control appears after this list).

Monitoring, measurement and reporting. A direct benefit of data quality rules, standards and controls is the ability to continuously inspect and monitor data sets and data streams for any recognizable issues, and to alert the right set of people when a flaw is detected.

Data quality incident management and remediation. One of the most effective techniques for improving data quality is instituting a framework for reporting, logging and tracking the status of data quality issues within the organization. Providing a centrally managed repository with integrated workflow processes and escalation means that issues are not ignored. Instead, issues are evaluated, investigated and resolved either by addressing the cause or determining other changes to obviate the issue. The visibility into the point of failure (or introduction of a data error) coupled with the details of the data quality rules that were violated help the data steward research the root cause and develop a strategy for remediation.
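As a minimal sketch of the rule-based controls and incident logging described above, a data quality check embedded in a data production workflow could look roughly like this; all rule names and field names are illustrative assumptions, not part of any particular MDM product.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    check: Callable[[Dict], bool]   # returns True when the record conforms

# Illustrative rules derived from (hypothetical) data consumer expectations.
RULES = [
    Rule("customer_id is present", lambda r: bool(r.get("customer_id"))),
    Rule("email contains '@'", lambda r: "@" in r.get("email", "")),
    Rule("age in plausible range", lambda r: 0 < r.get("age", -1) < 120),
]

def validate(record: Dict) -> List[str]:
    """Return the names of every rule this record violates."""
    return [rule.name for rule in RULES if not rule.check(record)]

# Embedding the control in the data production workflow:
incoming = [
    {"customer_id": "C-1", "email": "a@example.com", "age": 34},
    {"customer_id": "", "email": "not-an-email", "age": 250},
]
for rec in incoming:
    violations = validate(rec)
    if violations:
        # In a real deployment this would open an incident and alert a data steward.
        print(f"data quality incident for {rec!r}: {violations}")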

While one of the proposed benefits of MDM is improved data quality, in reality it’s the other way around: To ensure a quality MDM deployment, establish best practices for proactive data quality assurance.

 

Integrating Identity Management into the Business Process Model

The previous phases – oversight, understanding and control – lay the groundwork of a necessary capability for MDM: entity identification and identity resolution. The increased inclusion of data sets from a variety of internal and external sources implies the increased variation of representations of master data entities such as customer, product, vendor or employee. As a result, organizations need high-quality, precise and accurate methods for parsing entity data and linking similar entity instances together.

Similarity scoring, algorithms for identity resolution and record linkage are mature techniques that have been refined over the years and are necessary for any MDM implementation (a minimal similarity-scoring sketch appears after the list below). But the matching and linking techniques for identity resolution are just one part of the solution. When unique identification becomes part and parcel of the business process, team members become aware of how their commitment to maintaining high-quality master data adds value across the organization. Identity resolution methods need to be fully incorporated into the business processes that touch master entity data, implying the need for:

Enumerating the master data domains. It may seem obvious that customer and product are master data domains, but each organization – even within the same industry – may have numerous data domains that could be presumed to be “mastered.” Entity concepts that are used and shared by numerous organizations are candidate master domains. Use the data governance framework to work with representatives from across the corporation to agree on the master data domains.

Documenting business process models and workflows. Every business process must touch at least one master data entity. For an MDM program, it’s critical to understand the flow of business processes – and how those processes are mapped to specific applications. The organization must also know how to determine which applications touch master data entities.

CRUD (create, read, update, delete) characteristics and process touch points. Effective use of master data cuts horizontally across different business functions. Understanding how business processes create, read or update master data entity instances helps the data practitioner delineate expectations for key criteria for managing master data (such as consistency, currency and synchronization).

Data access services. Facilitating the delivery of unobstructed access to a consistent representation of shared information means standardizing the methods for access. Standard access methods are especially important when master data repositories are used as transaction hubs requiring the corresponding synchronization and transaction semantics. This suggests the need to develop a layer of master data services that can be coupled with existing strategies for enterprise data buses or data federation and virtualization fabrics.

“Master entity-aware” system development. If one of the root causes for the inadvertent replication of master data stems from siloed application development, the remedy is to ensure that developers use master data services as part of the system development life cycle. Couple the delivery of master data services with the proper training and oversight of application design and development.
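As a minimal sketch of the similarity scoring mentioned above, two candidate records could be scored like this; real MDM matching engines use far richer, tuned rules, and the field names here are illustrative assumptions.

from difflib import SequenceMatcher

def normalise(value: str) -> str:
    # Lower-case and collapse whitespace before comparing.
    return " ".join(value.lower().split())

def similarity(a: str, b: str) -> float:
    """A 0..1 similarity score for two strings."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio()

def match_score(rec_a: dict, rec_b: dict, fields=("name", "address")) -> float:
    """Average field similarity; a real system compares this to a tuned threshold."""
    return sum(similarity(rec_a[f], rec_b[f]) for f in fields) / len(fields)

# Two candidate representations of what may be the same master entity.
print(match_score(
    {"name": "ACME Corp", "address": "1 Main Street"},
    {"name": "Acme Corporation", "address": "1 Main St"}))

The higher the score, the more likely the two records describe the same entity; deciding where to set the matching threshold is exactly the kind of governance question the surrounding phases are meant to answer.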

The methods used for unique identification are necessary but not sufficient for MDM success. Having identified the business applications that touch master data entities is a prelude to exploring how the related business processes can be improved through greater visibility into the master data domains.