October 5, 2012
Yesterday, one of my colleagues found the best and easiest explanation of Application Initialization, and it's worth putting on my blog.
“IIS is a demand-driven web server, i.e. IIS does things only when asked for…”
You can download it from the Microsoft site: http://www.iis.net/downloads/microsoft/application-initialization
This was the start of a blog post back in late 2009 announcing the beta for the IIS 7.5 Application Warm-Up module. The idea behind this module is to address a common customer need. Specifically, the demand driven nature of IIS is such that it doesn’t load application code until it’s needed to handle a request.
Many applications do a significant amount of work when they first start. And in many cases, the framework needed to run the application is loaded in the same way. Finally, it may be necessary for the application framework to compile some or all of the application code before it can handle that first request. All of this leads to an all too common situation where the unlucky client that makes the first request to the application has to endure a long wait before seeing the first response.
So back before IIS 7.5 was released, we asked ourselves what functionality would be needed to help to address this problem. The results of that thinking were two new features in IIS 7.5, one of them an administrative feature, and one of them a new interface in the IIS pipeline.
I’d like to talk about the pipeline change first.
The IIS pipeline is the heart of the IIS runtime – the part that determines how an IIS worker process responds to events, like the arrival of a request that needs to be served. The pipeline is a collection of event notifications and application programming interfaces (APIs) that modules can plug into to do work when things happen. The IIS product team uses these events and APIs to implement all of the interesting parts of what people think of as IIS in the form of modules. For example, IIS uses a module called StaticFileModule to be able to serve plain files. It uses a module called DefaultDocumentModule to know how to serve the default document for a virtual directory or application. Modules are also used for things other than serving requests. The WindowsAuthenticationModule implements Windows authentication, and the UriCacheModule implements certain caching that you don’t see, but that improves the performance of the pipeline. If you are a programmer interested in the pipeline, one of the coolest decisions that we made for IIS 7.0 was to make the same interfaces that we use on the product team available to anyone. You can get started taking a look at it here.
So what does all of this have to do with warming up applications?
IIS 7.5 introduced a new event to the pipeline called GL_APPLICATION_PRELOAD. This event fires when the worker process is first starting up. A module plugged in here can register for the event to do work before the worker process notifies WAS that it is ready to handle requests. At the same time, we added a new pipeline interface that allows a module to create requests and drop them into the pipeline. These requests work just like requests from a client, except that there is no client, no network connection etc. Unless they specifically look for it, modules don’t see a difference between these “fake” requests and requests with a live client on the other end. Data that would normally be sent back to the client is discarded.
These two things together create the opportunity to solve part of the problem that I mentioned at the start of this post. When a worker process starts up, it can create a bunch of “fake” requests and send them through the pipeline. These requests will run through their respective applications without having to wait for a live client to create the demand to start.
So this looks good, but there is still a catch. IIS worker processes (which each host their own pipeline) are themselves demand started. Prior to IIS 7.5, the only way to start a worker process was for a client to make a request.
The second new IIS 7.5 feature makes it possible to automatically start an IIS worker process without waiting for a request. This feature is pretty straightforward. To enable it, just go to the advanced properties for the application pool in Internet Information Services Manager, and set “Start Automatically” to “True”. Once you do this, worker processes for the application pool will be started up as soon as the IIS service starts. You can do the same thing by setting the startMode property in applicationhost.config to alwaysRunning.
So now, if you have a module that can send a list of requests through the pipeline, and you have the application pool set to auto start, all of the applications represented by the list of requests will be touched as soon as IIS starts.
The Application Warm-Up Module
The Application Warm-Up module that I first mentioned at the top of this post was to be the module to send the warmup requests. Why do I say this in the past tense? Back in March last year, we pulled the beta release and pointed the download page to a notice explaining that it was temporarily removed.
There were a couple of things that happened leading up to the removal. The first thing is that the functionality that I’ve listed above only solves a part of the problem. The remaining puzzle piece is that, even under the best of circumstances, there is still a period of time after starting the application where it cannot respond to client requests. If you need to restart the IIS service for any reason, even enabling Start Automatically does not help requests that arrive at the moment that the service starts. To address that, there needs to be a way for IIS to actually send a response during the warmup period. And there needs to be a way that an application can participate in what that response looks like. If we wanted to solve that problem, we needed to make a deeper investment in the module. And since we were fully engaged in the development of IIS 8, we were able to do just that as a part of the next major IIS release.
The other factor is that, when we looked at how the beta module worked, we realized that we would need to make some changes to the new pipeline functionality that we introduced in IIS 7.5. Normally, when we introduce new APIs to IIS, we do so only when we are either going to use them ourselves or when we have a partner that is committed to using them before release. The pipeline changes for warmup were an exception to this because we didn’t have time to do the module before IIS 7.5 released. As sometimes happens when there is no code that depends on a new interface, we discovered that there were some things that would need to be fixed before Application Warm-Up could be made ship-ready. This meant that, over and above the new functionality in the module, we would need to ship a QFE for IIS 7.5 (which is included in the setup package for Application Initialization).
Where are we now?
Finally, almost a year after we pulled the beta, we were able to release the Release Candidate version of Application Initialization.
So that is the history up to this point for the Application Warm-Up/Initialization module. There are still questions that I’d like to answer:
– What is new in the RC?
– What’s the easiest way to start using it?
– What about advanced usage and troubleshooting?
– Why did the name change?
In my last post, I gave a bit of background on the Application Warm-Up module, now called Application Initialization. This week, I would like to go into more detail as to what the Application Initialization module does, and how you should think about using it.
As I mentioned earlier, the idea behind Application Initialization is that we want to provide a way that IIS can prepare an application to serve requests without having to wait for an actual client to make a request. With Application Initialization, we break the problem down into 3 parts:
- How can I start the worker process that hosts my application without waiting for a request?
- How can I get the worker process to load my application without waiting for a request?
- How can my application send some kind of response so that clients don’t see the browser hang until the application is ready?
I would like to address the first two questions here. The third question is a bit more complex and I will save it for my next post.
Starting a worker process without waiting for a request
This is something that’s not strictly speaking a part of Application Initialization in that we added this capability as a built-in feature of IIS, starting with IIS 7.5. I will go over it here because it works hand in hand with Application Initialization to make the application available as soon as possible after starting IIS.
This feature is controlled by the startMode property for the application pool, described (along with other application pool properties) here. The default value for startMode is OnDemand, which means that IIS will not spin up any worker processes until needed to satisfy a client request. If you set it to alwaysRunning, IIS will ensure that a worker process is always running for the application pool. This means that IIS will spin up a worker process when the World Wide Web Service is started, and it will start a new worker process if the existing one is terminated.
Note that this property should not be confused with the autoStart property. Understanding autoStart requires a bit of background knowledge. Both application pools and worker processes can be started and stopped. If an application pool is started, it means that IIS will accept requests for URLs within the pool, but it does not necessarily mean that there are any worker processes started. If an application pool is stopped, IIS will return a “503 Service Unavailable” for any requests to the application pool and it will not start any worker processes. The autoStart property is essentially a flag that IIS uses to know which application pools should be started when the World Wide Web Service is started. When you stop an application pool in IIS Manager, autoStart is set to false. When you start an application pool, autoStart is set to true. In this way, IIS ensures that the same set of application pools are running after the World Wide Web Service is started and stopped (or through a machine reboot.)
Now let’s take a quick look at the configuration for an application pool that is set to be always available. This application pool will start when the World Wide Web Service starts and it will immediately spin up a worker process.
<system.applicationHost>
  <applicationPools>
    <add name="DefaultAppPool" autoStart="true" startMode="alwaysRunning" />
  </applicationPools>
</system.applicationHost>
With this configuration, the Default Application Pool will immediately spin up a worker process when IIS is started, and it will spin up a new worker process when the existing one exits.
In IIS 7.5, this property is not exposed in IIS Manager. It can be set by editing the applicationhost.config file directly, through one of IIS’s scripting or programming APIs, or with the Configuration Editor UI tool. In IIS 8, we have added the startMode property to the advanced properties page for the application pools UI.
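To make the direct-edit route concrete, here is a minimal sketch (my own illustration, not an official IIS tool) that sets startMode on a pool using Python's standard xml.etree module. The config fragment is a simplified stand-in for the real applicationhost.config, not a complete file:

```python
import xml.etree.ElementTree as ET

# Illustrative applicationhost.config fragment, not a complete file.
fragment = """<applicationPools>
  <add name="DefaultAppPool" autoStart="true" />
</applicationPools>"""

pools = ET.fromstring(fragment)

# Locate the pool by name and set startMode so IIS keeps a worker
# process running instead of waiting for the first request.
pool = pools.find("./add[@name='DefaultAppPool']")
pool.set("startMode", "alwaysRunning")

print(ET.tostring(pools, encoding="unicode"))
```

In practice you would make this change through one of IIS's own management APIs or the Configuration Editor rather than rewriting the file by hand; the sketch just shows which attribute changes.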
How can I get the worker process to load my application without waiting for a request?
Now that you can see how to get IIS to spin up a worker process without waiting for a request, the next thing to address is how to get an application loaded within that worker process without waiting for a request. The Application Initialization module provides a solution here, and as above, it is controlled by a single configuration property.
The Application Initialization module extends the IIS configuration by adding a new property to the application settings called preloadEnabled (in IIS 8, this property is built-in.) Let’s take a look at what this looks like in the configuration where I’ve added a new application to the default web site and enabled it for preload:
<system.applicationHost>
  <sites>
    <site name="Default Web Site" id="1">
      <application path="/">
        <virtualDirectory path="/" physicalPath="%SystemDrive%\inetpub\wwwroot" />
      </application>
      <application path="/AppInit" applicationPool="DefaultAppPool" preloadEnabled="true">
        <virtualDirectory path="/" physicalPath="c:\inetpub\wwwroot\appinit" />
      </application>
    </site>
  </sites>
</system.applicationHost>
Here’s how Application Initialization uses this property. When a new worker process spins up, Application Initialization will enumerate all of the applications that it will host and check each one for this property. For any application where preloadEnabled=”true”, it will build a URL corresponding to the default page for the application and run it through the pipeline. This request does not go through the network, and there is no client listening for a response (IIS discards any data that would have gone to the client.)
This “fake” request accomplishes a few key things. First, it goes through the IIS pipeline and kicks off an application start event. This initializes a number of parts inside of IIS, and if the request is for ASP.NET, it will cause global.asax to run. It also reaches the application, which will see it as the first request after starting. Typically, I expect that applications will handle this request just like any other request from a real client, but we do set some server variables on our “fake” request, so an application with awareness of this feature could implement special processing if it chose to do so.
There is another important aspect to this process. When IIS spins up a new worker process, there is two-way communication between WAS and the new process. This allows WAS to know precisely when the worker process is ready to accept new requests. It also allows the worker process to learn from WAS whether it is a new process that should start taking requests, or a replacement process taking over for an older process that’s being recycled.
This is an important distinction. In the case of a new worker process, we want to start taking client requests as soon as possible, which is the way that things work outside of Application Initialization. In the case of a replacement process, though, Application Initialization will prevent the new process from reporting itself ready for new requests until all of the preload requests (and any warmup requests, which I will discuss later) have completed. This means that no client will ever have to wait for a process recycle to complete – because the old process will continue to take requests until the new one has completed all application initialization.
In my experience, many applications with a slow startup will do their work even for a simple request to the default page. For such applications, you can take advantage of improved application recycling simply by setting preloadEnabled=”true” for that application. As with the startMode property above, IIS 7.5 requires you to make this setting via direct edits to applicationhost.config, via scripting or one of our config APIs, or via the Configuration Editor UI tool. In IIS 8, we have added “Enable Preload” as a checkbox in the UI for application settings.
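As with startMode above, the direct edit amounts to setting one attribute. A minimal sketch (my own illustration; the site and application paths are assumptions, and the fragment is a simplified stand-in for applicationhost.config):

```python
import xml.etree.ElementTree as ET

# Simplified <sites> fragment; the application path here is an
# illustrative assumption, not taken from a real config.
fragment = """<sites>
  <site name="Default Web Site" id="1">
    <application path="/AppInit" applicationPool="DefaultAppPool" />
  </site>
</sites>"""

sites = ET.fromstring(fragment)

# Mark the application for preload so a new worker process runs a
# "fake" warmup request through it on startup.
app = sites.find("./site[@name='Default Web Site']"
                 "/application[@path='/AppInit']")
app.set("preloadEnabled", "true")

print(ET.tostring(sites, encoding="unicode"))
```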
The two topics that I’ve covered here should get you started with Application Initialization. The ability to handle worker process recycles has been a highly requested feature.
In my next post, I’ll tackle the topic of what it means to initialize an application and what things an application developer can do to make things responsive during the time everything is warming up. This is where we’ve made major changes and added a lot of stuff since the original beta release.
-from Wade Hilmo http://blogs.iis.net/wadeh/archive/2012/04/16/re-introducing-application-initialization.aspx
September 25, 2012
This review was first published by the author in the Sept 2012 issue of Software Developer’s Journal.
Object Oriented Reengineering Patterns, written by Serge Demeyer, Stéphane Ducasse, and Oscar Nierstrasz, is now out of print but is available as a free download from http://scg.unibe.ch/download/oorp/. The book covers a topic motivated by Martin Fowler’s observation in the Foreword that software development from a clean slate is “…not the most common situation that people write code in.” Rather: “Most people have to make changes to an existing code base, even if it’s their own.”
This article is a review of the book, which I found to contain gems of wisdom. I have also taken the liberty of embellishing the ideas presented in the book with my own personal experiences, and of adding what I feel is not given sufficient coverage.
The book is divided into four sections:
- Introduction
- Reverse Engineering
- Reengineering
- Appendix
Each section has several subsections in which the reader will discover a topic called “Forces.” This is a very useful narrative of the factors that need to be considered within that section of engineering patterns. For example, forces might involve different stakeholder agendas, risk analysis, prioritization, cost, etc.
The “patterns” presented in the book are not what is typically encountered as coding patterns, such as “Abstract Factory”, “Singleton”, etc. Rather, the patterns identified are common problems. The majority of the book consists of:
- A problem statement
- A solution
- Tradeoffs (Pros and Cons)
- Rationale (History)
- Known Uses
- Related Patterns / What’s Next
The consistency of this approach allows the reader to easily scan the book to fit his/her particular set of problems and focus on the suggested solutions, as well as engage in a discussion of the tradeoffs (Pros and Cons), the rationale (History) of the pattern, where the pattern has been used before (Known Uses) and to navigate to related patterns for further investigation.
Reverse Engineering, Reengineering, and Forward Engineering
In the Introduction, a key distinction is made between three different activities. I found that this distinction is very useful as it identifies activities of analysis, corrective rework, and new work. By separating tasks into these three categories, a more complete picture of the legacy system can be developed, one which then provides critical information to be applied to decisions such as budgets and schedules and required expertise.
Figure 1: Setting Direction (pg. 20)
The authors provide a concise statement for what is reverse engineering: “reverse engineering is essentially concerned with trying to understand a system and how it ticks.” This section of the book has several high-level topics:
- Setting Direction
- First Contact
- Initial Understanding
- Detailed Model Capture
each of which provides a group of patterns to aid in solving problems in that section.
I found the discussion of Forces in each section to be valuable – reverse engineering and reengineering require a level of constant vigilance that is brought to consciousness by reading through the Forces section. I would actually recommend that the Forces sections be re-read weekly by all team members and management at the beginning of any reverse / reengineering effort. There are also some omissions in the discussion, which I will address next.
What About the QA Folks?
One of the most valuable sources of information that I have found when working with legacy applications is talking with the Quality Assurance folks. These are people that know the nuances of the application, things that even the developers don’t know. While this might be inferred from the sub-section “Chat with the Maintainers”, the focus seems to be on the people maintaining the code–for example: “What was the easiest bug you had to fix during the last month?” This is, in my opinion, a significant omission of the book.
Regulations and Compliance Certification
One of the stumbling blocks I once encountered in a reengineering project was that the existing code had been certified to meet certain compliances. Re-certifying new code would be a costly and time-consuming process. This raises an issue that should not be ignored but which unfortunately the book completely omits – are there third-party certifications that the software must pass before it can be reengineered or forward engineered? What are those certifications and how were they achieved in the past?
Reverse Engineering is About More Than Code
I found the book to be a bit too code-centric in this section. For example, under the section “Detailed Model Capture”, the subsections:
- Tie Code and Questions
- Refactor to Understand
- Step Through the Execution
- Look for the Contracts
are all very code-centric. Several discussions seem to be lacking:
- Tools are available to aid in the reverse engineering process
- Reverse engineering the database
- Documenting user interface features
These are points that are critical to detailed reverse engineering. Tools that generate class and schema diagrams and reverse engineer code into UML diagrams can be invaluable in a detailed capture of the application. Over time, the user interface has probably accumulated all sorts of shortcuts and interesting behaviors that have been patched in as users have made requests, and missing these behaviors will alienate the user from any new application.
I found that tools to support the documentation process could have been discussed. For example, I have set up in-house wikis for companies to provide a general repository for documenting legacy applications and providing a forum for discussion on many of the excellent points the book raises regarding communication and problem analysis.
Another tool I have often found lacking in companies maintaining legacy applications is source control (I kid you not.) Most legacy systems are still being updated, often with bug fixes, during the reverse / re-engineering effort. A source control system is critical because it lets developers ensure that any new implementation mirrors the changes in the legacy system. It also provides a source of documentation – when a change is made to a legacy system, the developer can add comments that aid in the reverse engineering: why was the change made, what was discovered in making the change, how was the change made, and so forth.
Figure 2: Migration Strategies (pg. 182)
The authors define reengineering as “Reengineering, on the other hand, is concerned with restructuring a system, generally to fix some real or perceived problems, but more specifically in preparation for further development and extension.” This section of the book has five sub-sections:
- Tests: Your Life Insurance!
- Migration Strategies
- Detecting Duplicate Code
- Redistribute Responsibilities
- Transform Conditionals to Polymorphism
again, each of which provides a group of patterns to aid in solving problems in that section.
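The last pattern in that list is easy to make concrete. A minimal sketch (my own example, not taken from the book): a conditional keyed on a type tag, which must be edited every time a case is added, is replaced by classes that each own their behavior.

```python
# Before: a type-switch that grows with every new kind of shape.
def legacy_area(shape):
    if shape["kind"] == "circle":
        return 3.14159 * shape["r"] ** 2
    elif shape["kind"] == "rect":
        return shape["w"] * shape["h"]
    else:
        raise ValueError("unknown shape: %s" % shape["kind"])

# After: each class owns its own area() method. Adding a new shape
# means adding a class, not editing an ever-growing conditional.
class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Rect:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

shapes = [Circle(1.0), Rect(2.0, 3.0)]
total = sum(s.area() for s in shapes)
```

The payoff in a reengineering effort is that callers no longer need to know the full set of cases, which is exactly the kind of scattered knowledge that makes legacy conditionals expensive to change.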
While reengineering definitely involves testing and data migration, again I found this section to be overly code-centric. With legacy applications, regarding the database I often encounter:
- non-normalized databases
- obsolete fields
- missing constraints (foreign keys, nullability)
- missing cascade operations, resulting in dead data
- fields with multiple meanings
- fields that are no longer used for the data that the field label describes
- repetitive fields, like “Alias”, “Alias1”, “Alias2”, etc., that were introduced because it was too expensive to create additional tables and support a many-to-one relationship
Reengineering a database will break the legacy application but is absolutely required to move forward to supporting new features and requirements. Thus the pattern “Most Valuable First” (pg. 29), which states:
“By concentrating first on a part of the system that is valuable to the client, you also maximize the commitment that you, your team members and your customers will have in the project. You furthermore increase your chances of having early positive results that demonstrate that the reengineering effort is worthwhile and necessary.”
can be very misleading. The part of the system that is valuable to the client often involves reengineering a badly designed / maintained database, and reengineering the database will take time – you will simply have to bite the bullet that early positive results are simply not achievable.
Lastly, the authors make a distinction between reengineering and new engineering work: “Forward Engineering is the traditional process of moving from high-level abstractions and logical, implementation-independent designs to the physical implementation of a system.” Forward engineering is the further development and extension of the application, once one has been adequately prepared by the reverse engineering and reengineering process.
The reader will notice that subsequent to the Introduction, there are two sections describing reverse engineering patterns and reengineering patterns, but there is no section describing forward engineering patterns. There is a very brief coverage of common coding patterns in the Appendix. Certainly there are enough books on forward engineering best practices, but in my experience, there is a significant step which I call “Supportive Engineering”, that often sits between reverse engineering and the reengineering / forward engineering process.
Figure 3: Bridging Legacy and Forward Engineered Products (author)
Reengineering of legacy applications often requires maintaining both old and new applications concurrently for a period of time. What I have termed “Supportive Engineering” covers the pieces of code necessary to bridge data and processes while in this concurrent phase of product support. Depending on the scope of the legacy system, this phase may take several years! But it should be realized that basically all of the “bridge” code written will eventually be thrown away as the legacy application is replaced.
Commercial and In-House Tools
Supportive engineering also includes the use of commercial tools and the in-house development of tools that support the reverse and reengineering efforts. For example, there are a variety of unit testing frameworks readily available; however, the developers must still write the specific unit tests (see the section “Tests: Your Life Insurance!” in the book.) The legacy application may utilize a database, and it will probably not be normalized, requiring tools to migrate data from the legacy database to a properly normalized database.
Concurrent Database Support
Furthermore, during the concurrent phase, it may be necessary to maintain both the re-engineered database and the legacy database. This doesn’t just involve migrating data (discussed in the book under the section “Make a Bridge to the New Town.”) It may involve active synchronization between the legacy (often non-normalized) and reengineered (hopefully normalized) databases. Achieving this can itself be a significant development effort, especially as the legacy application is probably lacking the architecture to create notifications of data changes, most likely requiring some kludge (PL/SQL triggers, timed sync, etc.) to keep the normalized database in sync.
Another issue that comes up when having to maintain both legacy and reengineered databases is one of data compatibility. Perhaps the purpose of the reengineering, besides normalization, is to provide the user with more complicated relationships between data. Or perhaps what was a single record entry form now supports multiple records – for example, the legacy application might allow the user to enter a single alias for a person, while the new record management database allows the user to enter multiple aliases. During the concurrent phase, it becomes a significant issue to determine how to handle “data loss” when synchronizing the legacy system with data changes in the new system, simply from the fact that the legacy system does not support the same flexibility as the new, normalized database.
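The alias example can be sketched in a few lines (hypothetical record shapes, my own code, not from the book). The point is that the downgrade step forced by the legacy schema is a policy decision about what survives, not a technical detail:

```python
def sync_aliases_to_legacy(aliases):
    """Collapse the new system's alias list into the legacy system's
    single alias field. The legacy schema can hold only one value, so
    everything else is unavoidable "data loss" on the legacy side."""
    if not aliases:
        return None, []
    keep = aliases[0]     # policy choice: keep the first alias
    lost = aliases[1:]    # the rest cannot be represented in the legacy schema
    return keep, lost

keep, lost = sync_aliases_to_legacy(["Bob", "Rob", "Robert"])
```

The code is trivial; the real work is agreeing on the policy (first entry? most recently edited?) and deciding where, if anywhere, the dropped values are recorded so they can be restored once the legacy system is retired.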
It Will All Be Thrown Away
Remember that the tools, tests, and software designed to bridge between reengineered and legacy applications during the concurrent support phase will become obsolete once the legacy application is completely phased out. The costs (time and money) should be clearly understood and communicated to all stakeholders of the reengineering effort – management, developers, users, etc.
The book Object Oriented Reengineering Patterns offers some excellent food for thought. One of the most positive things about this book is that it will give you pause to think and hopefully to put together a realistic plan for reengineering and subsequently forward engineering a legacy application. For example, the advice in the section “Keep it Simple”:
“Flexibility is a double-edged sword. An important reengineering goal is to accommodate future change. But too much flexibility will make the new system so complex that you may actually impede future change.”
is useful as a constant reminder to developers, managers, and marketing folks. However, I think the book is too focused on the reengineering of code, leading to some gaps with regards to databases, documentation tools, and certification issues, to name a few. I don’t necessarily think that the concept of “reengineering patterns” was adequately presented – the book is more of a “leading thoughts” guide, and from that perspective, it has some very good value.
September 24, 2012
Siri is an intelligent personal assistant and knowledge navigator which works as an application for Apple’s iOS. The application uses a natural language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of Web services. Apple claims that the software adapts to the user’s individual preferences over time and personalizes results, performing tasks such as finding recommendations for nearby restaurants or getting directions.
Siri was originally introduced as an iOS application available in the App Store by Siri, Inc., which was acquired by Apple on April 28, 2010. Siri, Inc. had announced that their software would be available for BlackBerry and for Android-powered phones, but all development efforts for non-Apple platforms were cancelled after the acquisition by Apple.
Siri has been an integral part of iOS since iOS 5 and was first supported on the iPhone 4S. Support was extended to the iPad (3rd generation), the new iPhone 5, and the iPod Touch (5th generation) in September 2012 with the release of iOS 6. Siri is not supported on the iPhone 4 and earlier models, even on iOS 6.
September 19, 2012
Launching a smartphone into space: that’s the initial idea of the PhoneSat project, a small satellite project run by a team of engineers at NASA’s Ames Research Center. The PhoneSat team has built and developed two small satellite models powered by Android smartphones, with the rest of the body constructed mostly of things you could buy at your nearest hardware store.
At a 2009 meeting of the International Space University’s Space Studies Program, a group of young engineering students had a big idea: Why not try to develop a cheap satellite-control system using mostly off-the-shelf consumer technology items? They wanted to see if they could create something that could survive in space using existing technology, rather than spending resources to invest in building items from the ground up.
The idea is relatively simple. “Here’s what you have in your pocket, and it can fly in space,” says Oriol Tintore, a software and mechanical engineer for NASA’s PhoneSat project.
After we reported on these microsatellites earlier this summer, the PhoneSat team invited us to visit their lab at NASA’s Ames Research center in Moffett Field, California, in the heart of Silicon Valley. They showed off the two current PhoneSat models, PhoneSat 1.0 and PhoneSat 2.0, and further explained their upcoming mission, now scheduled for November 11, 2012.
PhoneSat: Meet the team, and the units
At a 2009 meeting of the International Space University’s Space Studies Program, a group of young engineering students had a big idea: Why not try to develop a cheap satellite-control system using mostly off-the-shelf consumer technology items? They wanted to see if they could create something that could survive in space using existing technology, rather than spending resources to invest in building items from the ground up. This big idea put the PhoneSat project into motion.
Today, team PhoneSat is a small group of about ten engineers, led by Small Spacecraft Technology program manager Bruce Yost and PhoneSat project manager Jim Cockrell.
“[PhoneSat] gives young engineers a chance to work on something that’s actually going to fly in space early on in their careers,” said Cockrell.
Although the project idea emerged in 2009, the building, designing, and testing of the parts that would make up the PhoneSat 1.0 model began in 2010. To power the satellite units, the PhoneSat team turned to small phones with large processors.
“This phone has a very powerful 2.4GHz processor, which is more powerful than most processors out there in space,” says Jasper Wolfe, who handles attitude control for the project. “It has just about everything we need, so why not use it? It’s a few hundred dollars compared to tens of thousands of dollars.”
The units themselves are small—compact enough to fit in your hand. At 10cm by 10cm by 10cm, each device is barely larger than a coffee cup. And the cost of each of these units is relatively low: PhoneSat 1.0 costs roughly $3500, while PhoneSat 2.0 is about $7800 due to its more advanced hardware.
PhoneSat 1.0 houses a Google Nexus One smartphone, which runs a single Android application that the team developed themselves. All of the Nexus’s phone capabilities have been disabled—Wolfe jokes that they have to set the phones to “airplane mode” before launch—and instead the device relies on the PhoneSat app for communication and data recording.
The PhoneSat team was initially attracted to the Nexus One because, at the time, it was one of the best smartphones available. Plus, they liked the open-source nature of developing for the Android platform.
“We talked about whether we should use an Android phone versus something else, like an iPhone, and the consensus was that the iPhone was a great phone, but an Android phone was a great satellite,” says Tintore, the team’s mechanical and software engineer.
Besides the Nexus One, the main pieces of the satellite include external batteries and an external radio beacon. A watchdog circuit will monitor the system and reboot the Nexus if necessary.
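The watchdog idea is simple enough to sketch in code. Below is a hypothetical illustration of a heartbeat-based watchdog—not the PhoneSat team's actual implementation, and the `Watchdog` class and its names are invented for this sketch. The phone's software checks in periodically; if the check-ins stop for too long, the watchdog assumes the phone has hung and power-cycles it.

```python
import time

HEARTBEAT_TIMEOUT = 30.0  # seconds of silence before we assume a hang


class Watchdog:
    """Reboots the payload if it stops sending heartbeats."""

    def __init__(self, reboot_fn, timeout=HEARTBEAT_TIMEOUT):
        self.reboot_fn = reboot_fn          # callback that power-cycles the phone
        self.timeout = timeout
        self.last_beat = time.monotonic()   # time of the most recent check-in

    def heartbeat(self):
        # Called by the phone's app whenever it checks in.
        self.last_beat = time.monotonic()

    def check(self):
        # Called periodically by the watchdog's own control loop.
        if time.monotonic() - self.last_beat > self.timeout:
            self.reboot_fn()
            self.last_beat = time.monotonic()  # restart the countdown
```

In the real unit this logic lives in a separate hardware circuit precisely so that it keeps running even when the phone itself has locked up.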
The team plans to evolve the satellites as technology evolves, which is why PhoneSat 2.0 uses a Samsung Nexus S instead of a Nexus One. In fact, the Nexus S has an added gyroscope already built in, which has been “extraordinarily helpful” in building a next-gen satellite, according to Wolfe. “It’s just a tiny little chip, but very useful,” he says. The gyroscope helps the phone measure and maintain orientation, so it assists with navigation as well as with the motion and rotation of the phone itself.
PhoneSat 2.0’s design includes a two-way S-band radio, solar arrays for unlimited battery regeneration (“Well, for as long as there is sun,” says Yost), and a GPS receiver. The radio will command the satellite from the ground, while the solar panels will enable the unit to embark on a mission with a long duration. Also built into the PhoneSat 2.0 design are magnetorquer coils (electromagnets that interact with Earth’s magnetic field) and reaction wheels to control the unit’s orientation in space.
Each model has been tested in environments that closely resemble what the units could encounter in space. Two Nexus One phones were launched on smaller rockets in 2010 as a preliminary test of how the phones would handle high speeds and high altitudes. One rocket crashed and destroyed the smartphone; the other landed with the Nexus One perfectly intact. Both the PhoneSat 1.0 and 2.0 models have also been tested in a thermal-vacuum chamber, on vibration and shock tables, and on high-altitude balloons, all with great success.
PhoneSat’s first mission: survival
Three of the PhoneSat units—two PhoneSat 1.0 models and one PhoneSat 2.0—are gearing up for their first mission. They’ll be hitching a ride on the first test flight of the orbital Antares rocket, which is scheduled to launch on November 11. Because the satellites are “hitchhikers,” according to the team, their status is entirely dependent upon when the Antares is ready to fly. The Antares’s first test flight was originally supposed to occur in August of this year, but that was delayed due to complications with the launch facility.
The delay of the launch was somewhat beneficial to the PhoneSat team, because it gave them more time to improve PhoneSat 1.0 and test PhoneSat 2.0. Originally, only PhoneSat 1.0 models were scheduled to fly on the Antares, but the delay allowed for a PhoneSat 2.0 model to go for a ride as well.
Each PhoneSat unit has its own role in the mission, which is more of a tech demonstration to show off what the team has accomplished with these microsatellites. The team jokes that PhoneSat 1.0 has a simple Sputnik-like goal of broadcasting status data back to Earth and taking photos. PhoneSat 2.0 will test the subsystems on the satellites themselves—features such as desaturation, location control, and the power-regeneration system.
The satellites will be in orbit for about 10 to 14 days before they reenter our atmosphere. According to Cockrell, mission duration isn’t limited by battery life and power, but rather by atmospheric drag: the higher the satellites go, the longer they will stay in orbit. Although PhoneSat 1.0’s batteries are expected to run out after ten days, PhoneSat 2.0 is solar powered, so it could, in theory, live longer. But it will still come back at around the same time as the two PhoneSat 1.0 models, because they all have roughly the same mass.
The future of PhoneSat
The team has plenty of ideas on things they’d like to try in the next PhoneSat rebuilds. Because their projects aren’t specific missions and are more about tech demonstrations, the team can always develop further by adding new subsystems. They definitely envision building a PhoneSat 3.0, a 4.0, and even more, because the team views the project as never being fully complete.
“We don’t have a defined level of when it’s completed,” Wolfe explains. “[It’s more about] however much you can make in a bit of time, how much can you get out of three months, then six months, and so on.”
According to Alberto Guillen Salas, the team’s hardware engineer, using materials that are mostly already built and ready to go gives the team the ability to develop satellites in a very short time. That way, they can test the units often to keep improving them.
As for smartphone models to try next, Wolfe suggested the Samsung Galaxy Nexus “because of the name” and to stick with phones from the Google family. The team would also like to use a smartphone with a high-resolution camera to capture good-quality photos from space.
PhoneSat’s next expected launch will be in the summer of 2013, when the team will be advancing the development of the PhoneSat 2.0 unit. The primary focus of this next tech demo is to push the 2.0 system and see what the group can do with it. Plus, the team can use information gathered from the first PhoneSat 2.0 launch this year to make improvements on the model. Radiation testing is extremely difficult to perform, so Cockrell anticipates that the team may have to update PhoneSat 2.0’s design to protect it from radiation for future launches. Only one unit will participate in the 2013 launch.
The part of the PhoneSat project that really excites the team is the possibility of involving the public in future PhoneSat developments, mainly through Android application development. The group would like to open the project up to allow people to write apps for the PhoneSats, and then send the units into space. The PhoneSat project has gathered interest through its public appearance at Maker Faire, through Random Hacks of Kindness, and through the International Space Apps Challenge.
“We’re getting a whole new crowd of people involved in space, people that didn’t have the money to get involved before,” says Wolfe, “though the leveraging of the open-source community around Android is also opening up a whole new market of people who want to get involved in space.”
(Source : http://www.techhive.com)
September 11, 2012 § Leave a comment
In my last blog post I mentioned the advantages of the PoC in Enterprise Architecture, and at the end I touched on its negative side. I thought I should cover that part here. We all know there are always pros and cons; what suits one organization won’t suit another. That’s why common sense matters: choose what’s right for you. Here is an article on Enterprise Architecture anti-patterns: Proved No Concept.
When Concepts are as clear as The Elephant on Acid
Anti Pattern Name: [Proved No Concept]
Type: [Management, Technical]
Problem: [Proofs of Concept are usually started in a hurry, without a clear definition of purpose or an agreed specification of the actual ‘concept to prove’. These end in acrimony when no concept is actually validated, because the fundamental objective was not clear from the outset. Quite often they become tenuous ‘proofs of technology’, or really orientation projects in which technologies are being trialled.]
Context: [Poor specification of requirements for the Proof of Concept is the main culprit. Over-exuberance and lack of planning, ill-defined concepts, or ‘make it up as we go along’ behaviours all act as amplifiers.]
Forces: [lack of governance, poor scope definition, no real understanding of the concept to prove at outset, the Proof of Concept is often really about finding and defining the concept to prove.]
Resulting Context: [Inconclusive outcomes, project overrun, false starts, confusion, weak hypotheses, badly designed research vehicles.]
Solution(s): [Resist pressure to commence a Proof of Concept without a well-articulated and signed off specification of the concept, its scope and how success (or otherwise) will be determined. If the concept is very complex or elusive, split the Proof of Concept into multiple phases with definition and agreement / candidate selection being the first stage(s). A Proof of Concept (PoC) that proves OR disproves the validity of the concept is a successful PoC. One that fails to reach any meaningful conclusion due to confusion over the concept being proved or disproved is a failure.]
Source : http://stevenimmons.org/2011/12/enterprise-architecture-anti-patterns-proved-no-concept/
September 10, 2012 § Leave a comment
The Value of the PoC in Enterprise Architecture
With appropriate planning, management, and presentation a Proof-of-Concept can become a key part of a successful Enterprise Architecture
by Scott Nelson
Enterprise Architecture is often very similar to the old story of the blind men and the elephant. The tale varies greatly in the telling; the consistent part is that they all examine the elephant at the same time, yet each examines only part of the whole animal. When they discuss what they have examined, they all have completely different perspectives.
Even if only implied, all Enterprise Architecture (EA) frameworks include the notion of viewpoints. That is, we all agree that an Enterprise Architecture consists of things, and that those things can have different meanings, degrees of importance, immediacy of value, and even levels of aesthetic appeal to different people. Enter the Proof-of-Concept (PoC). The goal of a PoC is to serve as the remedy for the confusion in that old tale of the blind men and the elephant. Before the PoC, each stakeholder has a different view of the Enterprise Solution. A successful PoC does not need to change anyone’s point of view; it only needs to demonstrate, to everyone’s satisfaction, that the solution will fit the picture as they see it.
Why Do a Proof-of-Concept in Enterprise Architecture?
The value of a PoC is its ability to reduce risk. At the level of detail generally applicable to Enterprise Architecture, everything can work. It is easy to say that a portal will provide appropriately credentialed users with access to all internal applications, enforcing user permissions seamlessly through the use of an enterprise identity and access management package. It is almost as easy for the solution architects to take the logical architecture and create a physical architecture showing exactly how specific vendor packages and enterprise libraries will wire together to realize the vision of this enterprise portal.
However, the perception of enterprise architecture can be badly damaged when the actual implementation of this architecture fails to meet cost, time, or usability expectations. Building a small version of the planned solution before making a large resource commitment and business dependency on the outcome can demonstrate the value of Enterprise Architecture and greatly reduce the risk of wasted resources and lost opportunities.
Good Reasons to Do a PoC for EA
The portal scenario described previously was purposely both common and medium-complex. A PoC can be valuable for something very simple, such as testing vendors’ “ease of integration” claims that two products can work together — something that quite often is true only given a long list of limitations that are not always as easily discovered as the initial claim.
A proof-of-concept effort around a very complex solution is not only a good idea; some frameworks consider it mandatory. A popular notion in Enterprise Architecture discussions of late is that EA is about managing complexity. While EA is about much more than that, successful EA should result in managed complexity, whether or not it is a stated outcome. Conducting a PoC of complex systems is a good first step in managing complexity.
A good rule of thumb is, if you expect higher than trivial consequences when an architecture solution building block stops working, that solution deserves some level of PoC.
Bad Reasons to Do a PoC for EA
Just as a PoC to verify that two products work together as claimed is a good idea, testing whether two products from the same vendor using a standard interface will work together when you have a good support agreement in place is a waste of resources. Not because the vendor’s claims will always be 100% valid, but because the pieces are already in place to correct any issues. The project plan should simply include the normal amount of slack time to cover the inevitable unknowns that will occur during an implementation.
It is also a bad idea to conduct a PoC of something that has to be done. An example is an upgrade or migration dictated by compliance requirements. In this case, because the delivery team knows they are going to have to “do it for real” after a PoC they will generally use a throw-away approach, making the effort nothing but wasted overhead and delay.
The value of a Proof-of-Concept is the mitigation of risk (a core value of Enterprise Architecture according to many frameworks, and it’s just plain common sense). If the risk is minimal, the investment in mitigation should be proportional.
In an EA PoC, Aim First, Fire After
So, if a PoC should be conducted to mitigate risk, there needs to be a clear understanding of the following:
- The risk that needs to be mitigated
- The consequences of failing to mitigate it
- What defines a successful mitigation of the risk
If any of those three are unknown, do not start work on the PoC until you understand the proof, the concept, and the reasons for both.
Even when there is pressure to complete the PoC quickly, buy as much time as possible so that the concept can be proved thoroughly. Many “PoCs” are in production today, and the reason maintenance costs keep going up despite improved processes is that while the processes are followed to the letter, the spirit of the underlying concept(s) is completely forgotten (or was never known).
Continuous Involvement, Consistent Messaging
So, how do you make a PoC as thorough as possible when there is inevitably pressure to succeed (though one important reason to do a PoC is to discover whether it can fail)? And when any level of success can be misconstrued as total success? First, show progress early. To business stakeholders, a PoC is doing something that is not making or saving money; it is only spending money. These stakeholders want, need, and deserve to know that their IT investment is being managed wisely. The difference between a successful and celebrated EA group and a struggling and mistrusted one is how stakeholders perceive their value. Structure the PoC effort to create demonstrable progress as early as possible.
For example, making progress early in a portal PoC would mean having the basic UI elements in place as quickly as possible, even if there is no real data behind them. For an infrastructure PoC, have cells marked “complete” in a project plan. No matter what the evidence is, make sure that it is easily recognizable to key stakeholders as early as possible.
The danger of showing progress early, however, is that the degree of progress can easily be misinterpreted by those who aren’t technically deep (i.e., those who are paying for the PoC). As some frameworks note, this is an important point in the process at which to mitigate the risk of presenting premature PoC “proof” without a validated concept. Always follow the validation of progress with an immediate reminder of the goal being pursued and the time/effort/dollar allotment that was agreed upon to reach that goal.
It also helps increase buy-in when presenting such reminders if you can show that you haven’t used all of the resources committed to reaching the goal. Try to get there early, under budget, or both when possible. Just don’t do it too often, or it may damage your credibility.
The tale of the blind men and the elephant has many different endings. In some they never come to consensus; in others they discover the elephant by combining their understandings. In Enterprise Architecture, the happy ending is when, after having concluded that all of their inputs fit together no matter how different, the blind men all climb on top of the elephant and ride comfortably in the same direction.
The thing about Enterprise Architecture is that it is based on common sense. Although Voltaire is credited with saying that it isn’t so common, those who possess common sense often seem to get the most from being reminded of things that are.
Finally, for a view from the negative side of this concept, there is an interesting article at stevenimmons.org.
September 4, 2012 § Leave a comment
Big Data is something I have my eye on, and very soon I am going to go in depth as I get time. Everyone knows why we should understand big data. I will definitely put up more posts later.
In information technology, big data is a collection of data sets so large and complex that it becomes awkward to work with using on-hand database management tools. Difficulties include capture, storage, search, sharing, analysis, and visualization. The trend to larger data sets is due to the additional information derivable from analysis of a single large set of related data, as compared to separate smaller sets with the same total amount of data, allowing correlations to be found to “spot business trends, determine quality of research, prevent diseases, link legal citations, combat crime, and determine real-time roadway traffic conditions”.
Though a moving target, as of 2008 limits were on the order of petabytes to exabytes of data. Scientists regularly encounter limitations due to large data sets in many areas, including meteorology, genomics, connectomics, complex physics simulations, and biological and environmental research. The limitations also affect Internet search, finance and business informatics. Data sets grow in size in part because they are increasingly being gathered by ubiquitous information-sensing mobile devices, aerial sensory technologies (remote sensing), software logs, cameras, microphones, radio-frequency identification readers, and wireless sensor networks. The world’s technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s; as of 2012, every day 2.5 quintillion (2.5×10^18) bytes of data were created.
Big data is difficult to work with using relational databases and desktop statistics and visualization packages, requiring instead “massively parallel software running on tens, hundreds, or even thousands of servers”. What is considered “big data” varies depending on the capabilities of the organization managing the set. “For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration.”
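To make the “massively parallel” idea concrete, here is a toy single-machine sketch of the classic map/reduce pattern: the data is split into chunks, each chunk is counted independently (map), and the partial counts are merged (reduce). Real big-data systems such as Hadoop run this same pattern across thousands of servers; the function names here are invented for illustration only.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor


def count_words(chunk):
    # "Map" step: count the words in one slice of the data.
    return Counter(chunk.split())


def parallel_word_count(text, workers=4):
    # Split the input into one chunk of lines per worker.
    lines = text.splitlines()
    size = max(1, len(lines) // workers)
    chunks = ["\n".join(lines[i:i + size]) for i in range(0, len(lines), size)]

    # Run the map step concurrently, then reduce the partial counts.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(count_words, chunks))  # map phase
    total = Counter()
    for partial in partials:                            # reduce phase
        total += partial
    return total
```

The point is not the word counting itself but the shape of the computation: because each chunk is processed independently, the same program scales out by adding more workers (or, in a real cluster, more machines) instead of a bigger database server.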
Here is a presentation I saw on InfoQ, so I thought I would share it with you.