Network Sniffer Spreading in Banking Networks

This year, the number of malware attacks on banking networks almost doubled compared to the previous year, and malware authors are adopting more sophisticated techniques in an effort to target as many victims as they can.
Until now, banking trojans simply stole users' credentials by infecting their devices. But recently, security researchers at the antivirus firm Trend Micro discovered a new variant of banking malware that not only steals information from the device it has infected but can also "sniff" network activity to steal sensitive information from other users on the same network.
The banking malware, a variant of EMOTET, spreads rapidly through spammed emails that masquerade as bank documentation. The spam comes with a link that users readily click, since the emails appear to concern financial transactions.
Once the link is clicked, the malware installs itself on the user's system and downloads its component files, including a configuration file and a .DLL file. The configuration file contains information about the banks targeted by the malware, while the .DLL file is responsible for intercepting and logging outgoing network traffic.
The .DLL file is injected into every process on the system, including the web browser. "This malicious DLL compares the accessed site with the strings contained in the previously downloaded configuration file," wrote Joie Salvio, security researcher at Trend Micro. "If strings match, the malware assembles the information by getting the URL accessed and the data sent." The malware then encrypts the stolen data and stores it in separate entries, which means it can steal and save any information the attacker wants.
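The matching logic Salvio describes can be sketched in a few lines of Python. This is purely illustrative: the target strings, function name, and log format below are hypothetical and not taken from EMOTET's actual configuration format.

```python
# Hypothetical sketch of the logic described above: compare each accessed
# URL against strings from the downloaded configuration file and, on a
# match, record the URL together with the data sent.
TARGET_STRINGS = ["examplebank.com/login", "onlinebanking.example.org"]

def on_outgoing_request(url, post_data, log):
    """Record the request if the URL matches any configured target string."""
    for target in TARGET_STRINGS:
        if target in url:
            log.append({"url": url, "data": post_data})
            return True
    return False

captured = []
on_outgoing_request("https://examplebank.com/login", "user=alice&pin=1234", captured)
on_outgoing_request("https://news.example.net/", "q=weather", captured)  # no match, ignored
```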
The malware is also capable of bypassing the secure HTTPS protocol, so users continue their online banking without ever realizing their information is being stolen.
Some of the network APIs hooked by the malware:
PR_OpenTcpSocket
PR_Write
PR_Close
PR_GetNameForIdentity
closesocket
connect
send
WSASend
The infection is not targeted at any specific region or country, but the EMOTET malware family largely infects users in the EMEA region, i.e. Europe, the Middle East and Africa, with Germany at the top of the list of affected countries.

Service Oriented Enterprise (SOE)

SOE is the architectural design of the business processes themselves, structured to take full advantage of an SOA infrastructure, with particular emphasis on SaaS proliferation and increased use of automation where appropriate within those processes.

The SOE model would be the enterprise business process model, which should then be traced to the other traditional UML models. Both sets of models fall within the realm of management by the Enterprise Architects. However, the audience focus of SOE is to bring technological solutions deeper into the day-to-day planning of the business side of the enterprise, making the Enterprise Architects more active in those decisions.

It allows the business to use the same analysis and design processes we have long used to design and develop software with MDE, but to make business decisions. The Enterprise Architects become the facilitators of moving the enterprise to SOE.

It requires the Enterprise Architects to stay actively aware of the ever-changing state of technological solutions and to project the possible impact on enterprise operations if they were deployed, bringing in SMEs as necessary to augment the discussions.

Linux web server: Nginx vs. Apache

The rise in popularity of nginx and the steady decline of Apache in the web server market have opened up new options for new deployments. Many recent large-scale server setups have ended up choosing nginx for the job, but should you?

Nginx's event-driven design gives it an edge over Apache's process-driven design, because it makes better use of today's computer hardware. Nginx performs extremely well at serving static content, and it can do so more efficiently than Apache can.

But in the Linux world, Apache's mature and capable platform has universal support. Things that 'just work' out of the box with Apache may need additional research and configuration under nginx. Control panels and automatic configuration tools may not be available for nginx yet. Your staff may be far more familiar with Apache and much more capable of diagnosing issues. Those benefits should not be underestimated, and the performance gains of nginx are negligible in the vast majority of scenarios.

Be careful when you weigh your options if you're setting up a hosting server or a critical business application. Trying to force everything onto nginx because you heard it will be drastically faster could be a mistake. The best strategy is usually formed by a combination of technologies rather than simple reliance on one web server platform.

There are performance gains to be had by using nginx if you cache your site, but they come at the expense of some out-of-the-box compatibility and a potential learning curve. If you're running a PHP application, you'll see bigger gains from using an opcode cache than from switching web servers.

The 'vanilla' build of nginx uses a simple cache (by the way, it's worth configuring a ramdisk or tmpfs as your cache directory; the performance payoff can be huge).
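A tmpfs-backed proxy cache might be declared along these lines; the mount point, zone name, and sizes are illustrative, so adjust them to your deployment.

```nginx
# /etc/fstab entry mounting the cache directory on tmpfs (illustrative):
#   tmpfs  /var/cache/nginx  tmpfs  size=256m  0  0

# nginx.conf, http context: a proxy cache backed by that directory
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m
                 max_size=200m inactive=60m use_temp_path=off;
```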

There is a module you can include at compile time that will allow you to trigger a cache flush. An alternative is simply to clear all files (but not directories) from the cache area. In general it works quite nicely: you can configure nginx to bypass the cache if the client includes a certain header, and you can override the origin's cache-control as well.
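Assuming the third-party ngx_cache_purge module was compiled in, the header-based bypass, cache-control override, and flush trigger might look like this (zone name, header, and upstream address are illustrative):

```nginx
location / {
    proxy_cache          pagecache;
    proxy_cache_key      $uri$is_args$args;
    # Skip the cache when the client sends an agreed-upon header
    proxy_cache_bypass   $http_x_bypass_cache;
    # Override the origin's cache-control; keep 200 responses 10 minutes
    proxy_ignore_headers Cache-Control Expires;
    proxy_cache_valid    200 10m;
    proxy_pass           http://127.0.0.1:8080;
}

# Flush trigger; requires the third-party ngx_cache_purge module
location ~ /purge(/.*) {
    allow 127.0.0.1;
    deny  all;
    proxy_cache_purge pagecache $1$is_args$args;
}
```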

Also worth noting: memcached isn't a good or efficient fit for some deployments. Take a website built on a CMS that supports scheduled publishing (let's say Joomla). When querying the database for a list of articles, you might run "SELECT * FROM #_content WHERE publish_up < '2014-06-07 15:10:11'".

A second later, the query will be different (though the results will likely be identical). Not only will you be unable to use a cached result, but you'll waste cycles caching a result set for a query that will never be run again.
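One common workaround for this is to truncate the embedded timestamp to a fixed window, so the query string (and hence the cache key) stays stable within that window. The sketch below is illustrative: the query shape follows the Joomla example above, but the function name is hypothetical, and the trade-off is that newly published articles may appear up to a minute late.

```python
# Embedding "now" to the second makes every query string unique, so a
# memcached entry keyed on the query is never reused. Truncating the
# timestamp to the minute keeps the key stable for up to 60 seconds.
from datetime import datetime

def article_query(now: datetime) -> str:
    bucket = now.replace(second=0, microsecond=0)   # truncate to the minute
    ts = bucket.strftime("%Y-%m-%d %H:%M:%S")
    return f"SELECT * FROM #_content WHERE publish_up < '{ts}'"

q1 = article_query(datetime(2014, 6, 7, 15, 10, 11))
q2 = article_query(datetime(2014, 6, 7, 15, 10, 12))   # one second later
# q1 == q2, so both requests hit the same cache entry
```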

Whether you need to worry about that obviously depends on the content you're querying. For most sites it's probably not a drama, but if the #_content table happens to be huge then it's potentially a problem (especially as the real query is somewhat more complex than my example). With nginx's caching you'd be caching the resulting HTML page, so you wouldn't need to worry about this (though if you use scheduled de-publishing, you'd want to be careful).

Obviously, the above assumes you're using memcached at the database level rather than for the overall output; again, it's somewhat deployment-dependent.