SOE is the architectural design of the business processes themselves to accentuate the use of an SOA infrastructure, with particular emphasis on SaaS proliferation and increased use of automation where appropriate within those processes.
The SOE model would be the enterprise business process model, which should then be traced to the other traditional UML models. Both sets of models fall within the realm of management by the Enterprise Architects. However, the audience focus of SOE is to bring technological solutions deeper into the day-to-day planning of the business side of the enterprise, making the Enterprise Architects more active in those decisions.
It allows the business to use the same analysis and design processes that we have been using to design and develop software with MDE, but to make business decisions. The Enterprise Architects become the facilitators of moving the enterprise to SOE.
It requires the Enterprise Architects to actively stay aware of the ever-changing state of technological solutions and to project the possible impacts on enterprise operations if deployed, bringing in SMEs as necessary to augment the discussions.
The rise in popularity of nginx and the steady decline of Apache in the web server market have created new options for new deployments. Recently, larger-scale server setups have tended to choose nginx for the job – but should you?
nginx's event-driven design gives it an edge over Apache's process-driven design, because it can make better use of today's computer hardware. nginx performs extremely well at serving static content, and it can do so more efficiently than Apache can.
But in the Linux world, Apache's mature and capable platform has universal support. Things that 'just work' out of the box with Apache may need additional research and configuration under nginx. Control panels and automatic configuration tools may not be available for nginx yet. Your staff might be a lot more familiar with Apache and much more capable of diagnosing issues. Those benefits should not be underestimated, and the performance gains of nginx are negligible for the vast majority of scenarios out there.
Be careful when you weigh your options if you're setting up a hosting server or a critical business application. Trying to force everything into nginx because you heard it will be drastically faster could be a mistake. The best strategy is usually formed by a combination of technologies rather than simple reliance on a single web server platform.
There are performance gains to be had by using nginx if you cache your site, but they come at the expense of some out-of-the-box compatibility and a potential learning curve. If you're running a PHP application, you'll see bigger gains from using an opcode cache than from switching web servers.
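As a sketch of that, enabling OPcache in php.ini looks something like the following (the numeric values here are illustrative starting points, not tuned recommendations):

```ini
; Enable the opcode cache so compiled PHP bytecode is reused between requests
zend_extension=opcache
opcache.enable=1
; Shared memory for cached bytecode, in megabytes (illustrative value)
opcache.memory_consumption=128
; Maximum number of scripts that can be cached (illustrative value)
opcache.max_accelerated_files=10000
; How often (in seconds) to check scripts on disk for changes
opcache.revalidate_freq=60
```

Because the bytecode cache skips re-parsing and re-compiling PHP on every request, it pays off regardless of which web server sits in front.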
The 'vanilla' build of nginx uses a simple cache (by the way, it's worth configuring a ramdisk or tmpfs as your cache directory; the performance payoff can be huge).
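A minimal sketch of that setup, assuming a tmpfs mount at /var/cache/nginx and a backend on port 8080 (the mount point, zone name, and sizes are all illustrative):

```nginx
# /etc/fstab entry (illustrative): back the cache directory with tmpfs
#   tmpfs  /var/cache/nginx  tmpfs  size=256m  0  0

# nginx.conf, http context: define a cache zone in the tmpfs directory
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m
                 max_size=200m inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache static_cache;
        # Cache successful responses for 10 minutes (illustrative lifetime)
        proxy_cache_valid 200 10m;
        proxy_pass http://127.0.0.1:8080;
    }
}
```

Since the cache lives in RAM, cached responses never touch the disk on the read path.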
There is a module you can include at compile time that will allow you to trigger a cache flush. An alternative is simply to clear all files (but not directories) from the caching area. It works quite nicely in general, though: you can configure nginx to bypass the cache if the client includes a certain header, and you can override the origin's Cache-Control as well.
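As a hedged example of those last two points (the header name and cache zone are assumptions for illustration, not fixed conventions):

```nginx
location / {
    proxy_cache static_cache;
    # Skip the cache when the client sends a custom header,
    # e.g. "X-Bypass-Cache: 1" (the header name is just an example)
    proxy_cache_bypass $http_x_bypass_cache;
    # Ignore the origin's caching headers and impose our own lifetime
    proxy_ignore_headers Cache-Control Expires;
    proxy_cache_valid 200 10m;
    proxy_pass http://127.0.0.1:8080;
}
```

The bypass variable is evaluated per request, so ordinary clients still get cached responses while tooling that sends the header always reaches the origin.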
Also, it's worth noting that memcached isn't a good/efficient fit for some deployments. Take a website built on a CMS that supports scheduled publishing (so let's say Joomla). When querying the database for a list of articles, you might run "select * from #_content where publish_up < '2014-06-07 15:10:11'".
A second later, the query will be different (though the results will likely be identical). Not only will you be unable to use a cached result, but you'll waste cycles caching a result set for a query that will never be run again.
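To make the problem concrete, here are two such queries issued one second apart; each produces a distinct query string, so a query-keyed cache stores two entries for what is almost certainly the same result set:

```sql
-- Request at 15:10:11 — cached under one key, never reused
select * from #_content where publish_up < '2014-06-07 15:10:11';

-- Request at 15:10:12 — identical results, but a brand-new cache key
select * from #_content where publish_up < '2014-06-07 15:10:12';
```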
Whether you need to worry about that obviously depends on the content you're querying. For most sites it's probably not a drama, but if the #_content table happens to be huge then it's potentially a problem (especially as the actual query is somewhat more complex than my example). With nginx's caching, you'd be caching the resulting HTML page and so wouldn't need to worry about this (though if you're using scheduled de-publishing, you'd want to be careful).
Obviously the above assumes you're using memcached at the DB level rather than for the overall page output – again, it's somewhat deployment-dependent.