My initiation to the IBM Internet of Things

I recently moved to the IBM Internet of Things division and needed to learn more about the IBM Watson IoT Platform. There is a lot of hype and hyperbole around IoT, and the amount of information available — even just on IBM’s offerings — can be overwhelming and confusing, especially when it assumes knowledge you don’t yet have.

For others who may be in the same boat, here’s a layperson’s introduction to IBM’s IoT Foundation offering, the underpinning of our IoT Platform.

You probably know that IoT solutions collect data from “things” (sensors or devices) and analyze that data to make decisions or take actions. As more “things” become instrumented and combined with data from other sources, advanced analytics, and cognitive systems, IoT solutions get very interesting, like the self-driving car that you’ve likely heard about.

Very cool, but also sounds very complex. How does this actually work?

Full disclosure: I’m not a programmer, although I can understand code; learning new languages and piecing together programs is not my idea of fun. So my goal was to understand how this all works without having to write code. [If you like to code and prefer to get your hands dirty, you might prefer to start with Exploring IBM Watson Internet of Things, or in the IBM Bluemix IoT Quickstart environment, where you can experiment in a sandbox.]

The IoT Foundation (IoTF) service enables communication between the devices that generate the data and the applications that want to interact with that data or with the devices themselves.  There is an excellent overview diagram in the very helpful IoT Foundation documentation.

The IoT Foundation is available on IBM Bluemix as a hosted service, and recently became available to run in your own data center as a managed service (details here).

When you set up your IoTF service, you get an “organization id” that identifies and groups your devices and applications. This value is used during the connection and authentication process to generate the tokens that devices and applications use for security. Connections can also be encrypted.

Then you register your devices with IoTF. You can use the IoTF browser-based dashboard to add them manually, or write application code to manage the registration using REST APIs or one of several programming languages (libraries are provided to help). You can register devices individually or in bulk, and even set things up so devices can register themselves. Devices can then transmit data to IoTF, usually using JSON messages over MQTT, a lightweight messaging protocol.
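To make this concrete, here is a minimal sketch of the naming conventions involved, in plain Python with no MQTT library. The organization id, device type, and device id values are hypothetical; the client-id and topic shapes follow the platform’s documented MQTT conventions.

```python
import json

# A registered device connects with a client ID of the form
# d:<org-id>:<device-type>:<device-id>, then publishes JSON events
# to topics of the form iot-2/evt/<event-id>/fmt/json.

def device_client_id(org, device_type, device_id):
    """MQTT client ID for a registered device."""
    return "d:{0}:{1}:{2}".format(org, device_type, device_id)

def event_topic(event_id):
    """Topic a device publishes an event to."""
    return "iot-2/evt/{0}/fmt/json".format(event_id)

# A sensor reading wrapped in the conventional {"d": {...}} JSON envelope.
payload = json.dumps({"d": {"temperature": 21.5, "humidity": 40}})

print(device_client_id("abc123", "thermostat", "device001"))
# d:abc123:thermostat:device001
print(event_topic("status"))
# iot-2/evt/status/fmt/json
```

In a real device client you would hand these strings to an MQTT library along with the device’s authentication token; the point here is just the shape of the identifiers and the payload.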

You also connect your applications to IoTF, using the organization id and an API key and token generated from your IoTF instance. The applications are how you make use of the data, whether that’s applying analytics, invoking actions based on triggers, or what-have-you.

Using IoTF, applications can subscribe to data “events” from devices, or send commands to the devices. IoTF provides device management commands to reboot, reset, and manage device firmware – which you can also issue from the IoTF dashboard – presuming the target device has the capability to respond. (Of course, someone did have to program that device in the first place.)
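On the application side, the MQTT conventions are similar but scoped by device: an application identifies itself with an a:&lt;org&gt;:&lt;app-id&gt; client ID, subscribes to event topics (using “+” wildcards to match any device), and publishes commands to a specific device’s command topic. A sketch, again with hypothetical ids:

```python
def app_client_id(org, app_id):
    """MQTT client ID for an application: a:<org-id>:<app-id>."""
    return "a:{0}:{1}".format(org, app_id)

def event_filter(device_type="+", device_id="+", event_id="+"):
    """Topic filter an application subscribes to for device events;
    '+' is an MQTT wildcard matching any type, device, or event."""
    return "iot-2/type/{0}/id/{1}/evt/{2}/fmt/json".format(
        device_type, device_id, event_id)

def command_topic(device_type, device_id, command_id):
    """Topic an application publishes to, to command one device."""
    return "iot-2/type/{0}/id/{1}/cmd/{2}/fmt/json".format(
        device_type, device_id, command_id)

print(event_filter())
# iot-2/type/+/id/+/evt/+/fmt/json   (all events from all devices)
print(command_topic("thermostat", "device001", "reboot"))
# iot-2/type/thermostat/id/device001/cmd/reboot/fmt/json
```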

To see this in action, I highly recommend the IoTPhone sample application on IBM Bluemix, as described in this video and documented here. In the sample, you register your smartphone with IoTF and then view the data from its sensors (accelerometer and GPS) coming into the IoTF service. No programming required, although you can also view and modify the sample application code for registering and connecting your device.

There is a second part of the sample that shows how to use that device data with IBM Real-Time Insights, another IoT Platform service on Bluemix that provides analytics and rules. So for example, you can trigger an email or other action based on the data values. I’ll leave details on that for a future post.

I hope this intro to IoT Foundation helped someone besides myself with IoT on-ramping. Happy exploring!

New: Configuration Management in Rational Collaborative Lifecycle Management 6.0

In case you haven’t heard: Rational Collaborative Lifecycle Management (CLM) v6.0 introduces new support for configuration management, to better enable strategic reuse, change management, and product-line engineering.

So what IS configuration management?  You may be familiar with source code configuration management (SCM), widely used by software programmers.  Artifacts are versioned in the context of “streams”, so you can include different versions of the same artifact in multiple streams — perhaps to support maintenance changes on an initial release, while allowing separate parallel changes to those artifacts for a new release. Changes can be delivered across streams, for example to add fixes from the maintenance stream to the new release. Multiple streams or configurations enable artifact reuse (unchanged artifacts are referenced, not copied) and isolate changes. You can also take baselines to capture the state of your artifacts at significant milestones, like a release date.

In v6, CLM extends configuration management to include requirement, test, and design artifacts. You can define streams of artifacts in each of DOORS Next Generation, Quality Manager, and Design Manager (and of course, Rational Team Concert’s SCM component); modify or include different artifacts or versions of artifacts in the context of each stream; and take baselines to reflect the status at a given point in time.

Not only can you do this within the individual application domain, you can also define “global configurations” that bring together streams from the different domains — including artifacts from requirement, test, design, and/or code domains.  And you can take global baselines of these configurations, capturing a point-in-time for all the artifacts associated with your release or product line.

Why should you care?  (If you’re already doing product-line engineering, the answer might be obvious to you.)

Consider: how often might you use test artifacts from a given release in a subsequent release? Do you ever have a requirement that applies to more than one release, maybe with modifications?  Maybe you want to manage change against requirements, isolating revisions into separate streams or “change sets” while teams continue to use a primary stream.  Perhaps you simply want to capture a complete picture of the artifacts that went into a particular release.

For a great overview of configuration management, check out this YouTube playlist.

And look for more posts about the new CLM capabilities on IBM developerWorks and right here.

a herd of “aaS”s

Read about cloud computing, and you tend to see “aaS” quite a bit.  As you probably know, it stands for “as a service”; cloud computing is all about providing self-service and managed operations to clients, who pay for what they use.  The typical prefixes for “aaS” are I (for Infrastructure), P (for Platform), and S (for Software).  And the difference is in what the cloud service provider manages, vs what the cloud client takes care of.

Let’s start from the top down: “Software as a Service”.  The cloud provider pretty much manages the entire software stack from soup to nuts (or hardware up to the application itself). The client just logs in and uses an instance of the application(s) that they need.  Like WordPress – you just sign in and can start writing a blog. Or Google Docs.  (Of course, as web users, we don’t really know how something is hosted on the back-end, but just like cloud, we don’t have to care!)  In the SaaS model, the client doesn’t install anything; they just log in and go. They may be able to customize some options of the application, depending on the offering.

The next step down is “Platform as a Service”.  The cloud provider takes care of everything up to the runtime environment – including middleware (web application servers, database servers, etc) and operating system options. The client decides and manages the actual applications, code, and databases that get deployed into that environment.  So more work for the client, but also more control over the software that they use.

Then there’s “Infrastructure as a Service”.  The cloud provider manages the hardware (storage, servers), networking, and the virtualization services that make a cloud a cloud. The client takes care of operating systems, middleware and runtime, in addition to the applications.  This offers the greatest flexibility to the client.

While these seem like well-defined chunks, I suspect the reality is a little less cut and dried.  Could you have PaaS where the customer provides some of the middleware? Or where the IaaS includes operating systems?  Pretty sure those scenarios could be worked out with many cloud providers.

There are some great diagrams of the “aaS”s floating around. I didn’t want to plagiarize by copying one here.  I also found some references to “Business as a Service” (BaaS) – where the software provided has business processes and intelligence already built in. I wonder how many more “aaS”s will be defined as cloud computing further matures?  (hmm. did that sound wrong?)

The net net is that the cloud model allows multiple levels of service provision for clients – from the very basic virtualized environment to the entire software stack. It’s up to the client to decide how much they want their provider to manage for them, and how much they want to take on and control themselves.

head in the Cloud…

Lately, I’ve been learning about cloud solutions, including IBM’s own Bluemix offering.  I confess to initially finding the whole thing quite nebulous (haha) and complex.  It reminds me of the old mainframe days, when one used a dumb terminal to access apps and data on the mainframe, which unseen operators managed in the background (they even had virtual partitions!).  Anyways, I figure I’m not alone in being mystified by all the terminology and technologies surrounding cloud, so I thought I’d share some of what I’ve learned.

First, what the heck IS a cloud?  It’s basically a managed IT operating environment that can include systems and applications. Clients pay only for what they use, as they use it. The environment can grow or shrink with usage needs, requisitioning or relinquishing storage or CPUs as required. For example, you might host your website on a cloud; at low-access times, maybe you have only one server running, but during peak times, the cloud adds additional servers and balances the load across them – automagically (to the client, at least).  Typically the cloud also has self-service capabilities, so the client can set up what they need, service levels, etc, without having to wait for someone to help them.

The cloud environment uses virtualization to separate “workloads” (i.e. the running systems or applications) from each other and ensure everyone gets their privacy and their fair share of resources. All kinds of different workloads can run simultaneously. (To understand the virtualization bit, it helped me to think about a VMware server, hosting and running many virtual images at once, each with different operating systems and software, all sharing the machine hardware.)

For clients, it can lower costs (think hardware, ops/maintenance, training…) and reduce worry (is our system big enough, too big, adequate failover…).  The cloud is “always on” and up to the task — assuming you pick a good cloud provider of course!

A cloud can be public, private, or hybrid, and on-premise or off-premise.  A private cloud is for the use of a single client, while a public cloud is shared by multiple clients (also called multi-tenant).  A hybrid cloud is a combination, or either one combined with other client on-premise systems.  On-premise means located at the client’s site (typically private), while off-premise is somewhere else (any type of cloud can be off-premise).  Location is often based on things like legal or regulatory requirements about where you can store data.

Next post, I’ll look at the “aaS”s (IaaS, PaaS, SaaS) and how they differ.

Focal Point-CLM integration: how to find the properties you want

Previous posts looked at how to define REST view commands to integrate Rational Focal Point and the Collaborative Lifecycle Management applications. It’s pretty straightforward, if you can find the properties you want. Sometimes you may not be sure what properties or data are actually available, or you may want a particular property but can’t find it in the artifact’s template XML. In those cases, it’s helpful to use an interactive REST client to explore the artifact’s representation and discover its properties and associated resources (which often contain properties of their own that are of interest for the integration).

In the first video, we see how to use a REST client to query an RTC work item’s representation, and then to drill down into specific aspects of that representation, namely the resources associated with the work item. In the second, we see how to apply the knowledge of those underlying resources and properties when defining REST view commands.
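If you’re curious what such a query looks like outside an interactive REST client, here’s a small Python sketch of the same request. The server URL and work item number are hypothetical; the Accept and OSLC-Core-Version headers ask RTC for the OSLC JSON representation whose properties you would then drill into.

```python
from urllib.request import Request

def work_item_request(base_url, item_number):
    """Build a GET request for a work item's OSLC JSON representation.
    RTC exposes work items at
    .../resource/itemName/com.ibm.team.workitem.WorkItem/<number>."""
    url = "{0}/resource/itemName/com.ibm.team.workitem.WorkItem/{1}".format(
        base_url, item_number)
    return Request(url, headers={"Accept": "application/json",
                                 "OSLC-Core-Version": "2.0"})

# Hypothetical CCM server and work item 42; actually sending the request
# would also require authenticating against your Jazz server.
req = work_item_request("https://myjazz:9443/ccm", 42)
print(req.get_full_url())
# https://myjazz:9443/ccm/resource/itemName/com.ibm.team.workitem.WorkItem/42
```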

Video 1: using the REST client to explore RTC artifacts

Video 2: defining REST view commands to access artifact resources and properties

My parting words on this: be curious, and give it a try.  If something in the artifact properties looks interesting, or you don’t know what it is, query it and find out. Set up a playground environment so you can try exchanging values between different artifact properties, and see what happens. I discovered quite a bit through simple trial and error.

Happy integrating!

Another FP-CLM REST view example

In the last post, we looked at defining a REST SYNCHRONIZE command to get work item properties from Rational Team Concert and update attributes in Focal Point. I figured we should probably take a look at the Requirements Management side of things also, so this time, the video shows how to define a REST PUT command to push values from Focal Point into artifact properties in the RM application of Rational Collaborative Lifecycle Management (CLM).

My environment is still using Rational Requirements Composer 4.0.4; I believe CLM is up to 5.0.1 now, and the RM application is now called Rational DOORS Next Generation. However, the basic REST APIs should work the same. I’ll just refer to “RM” to be inclusive.

In the video, you’ll see how RM assigns random string identifiers to properties, and how to work around that. You’ll also see some troubleshooting. Otherwise, the process is really very similar to that for RTC and for the other types of REST commands.


(tip: set the video quality to 720p HD for better resolution)

Defining REST view commands for Focal Point -CLM integration

Yes, I’m finally getting to the details of how you implement the integration between Rational Focal Point (FP) and Rational Collaborative Lifecycle Management (CLM)!

In the video below, I show how to define a REST view command in Focal Point to extract data from a Rational Team Concert work item artifact and update fields in a FP artifact (REST SYNCHRONIZE). The steps for defining a REST PUT (to push values from FP to RTC) aren’t shown, but are very similar.

(tip: set the viewing quality to 720p HD for better resolution)

Before defining REST view commands, you need some basic configuration, which is documented in the FP online help (and maybe another post). The video also shows a pretty simple use case, where you access plain-jane attributes in RTC, like text strings or dates. In yet another video, I’ll show how to access “special” values for the work item, such as other associated resources and enumeration values.

The same steps also work for Rational DOORS Next Generation (RDNG, or Rational Requirements Composer in previous incarnations). That said, RDNG is a bit tougher, since any custom attributes you define are given a random string identifier, which makes it difficult for YOU to identify them in the Attribute Mapping window. To overcome this, assign unique values to those attributes in RDNG proper so you can recognize them when you do the mapping.