Thursday, November 26, 2009

Spammy Messages on Exception classes

I need a smarter ABAP developer to tell me this. Why does it take 10 steps to link a T100 message to a particular instance of an exception class? For every message text, you have to define the exception variables within the exception class, then create the exception text, then create a T100 message, then link the message and each message variable (this is the painful step), then raise the bloody exception class with exporting parameters specific to that message. It's a pain. Alright, to all you lucky guys on ECC7 and ECC8, have they made this easier? SAP, at least put in a table control grid so I can link the T100 message variables more easily. My clients won't understand why I took 3 days just to create messages the "right way". I've given up and decided to raise "MESSAGE ... INTO lv_dummy", raise the exception class, and re-evaluate the SY variables using a static method from a cross-app utility class. I cut-paste it everywhere... job done.
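For the record, the cut-paste pattern looks roughly like this. A minimal sketch only: `zz_messages`, `zcl_msg_util` and `zcx_generic` are made-up names, and the generic exception class is assumed to implement IF_T100_MESSAGE with attributes MSGV1..MSGV4.

```abap
DATA lv_dummy TYPE string.

" Fire the T100 message INTO a dummy variable: nothing is displayed,
" but SY-MSGID, SY-MSGNO and SY-MSGV1..4 get filled.
MESSAGE e001(zz_messages) WITH lv_matnr INTO lv_dummy.

" Hand over to the cross-app utility, which raises the exception
" from whatever is sitting in the SY-MSG* fields.
zcl_msg_util=>raise_from_sy( ).

" --- in ZCL_MSG_UTIL ---
METHOD raise_from_sy.
  DATA ls_key TYPE scx_t100key.
  ls_key-msgid = sy-msgid.
  ls_key-msgno = sy-msgno.
  " attr1..4 name the exception attributes that hold the values
  ls_key-attr1 = 'MSGV1'.
  ls_key-attr2 = 'MSGV2'.
  ls_key-attr3 = 'MSGV3'.
  ls_key-attr4 = 'MSGV4'.
  RAISE EXCEPTION TYPE zcx_generic
    EXPORTING
      textid = ls_key
      msgv1  = sy-msgv1
      msgv2  = sy-msgv2
      msgv3  = sy-msgv3
      msgv4  = sy-msgv4.
ENDMETHOD.
```

One generic exception class, one utility method, and any T100 message can be raised without wiring up exception texts for each one.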


- Posted using BlogPress from my iPhone

Story of a Landscape change and Transports for a major release

I’m going to recount a recent experience with transports and a landscape change. I’m reasonably happy to be still learning.

Background
We started a new integration project and, due to the large number of developments, decided to deploy the project's development environment in a separate instance. Developments and configuration are then migrated through the normal "Business-as-usual (BAU)" DEV, QA and Prod landscape as part of a change control process.

We also have Revtrack, which helps somewhat by warning us of object contentions.


The first event
A significant change in landscape required us to consolidate the development in the BAU environments. It was a reasonable choice except for some issues below.

Here are some of the problems we encountered

  • The transports have to be imported into the BAU environment, and old transports in the project system have to be quarantined in Revtrack.
  • The source system of development objects has to be changed so that they don't get classified as repairs (this is done through SE03).
  • Some of the data remained in the Project Environment, which meant that in order to test, new transports had to be released. This led to a proliferation of small transports.

Because of the mass of new transports during testing, the migration to the QA environment was projected to be quite tedious. Revtrack will OOPS all over the place. I decided to create a "SuperRevtrack" to consolidate all the related developments under my area, reflecting the latest state of the development environment.

Thanks to the SuperTransport, the migration to QA went well and integration testing started on track. It solved one problem, but created a bigger one later…


The second event
During integration testing, in order to manage incremental changes, the SuperRevtrack was frozen. Changes were created in new Revtracks.

This was done to allow different developers to migrate their changes to QAS independently.
Unfortunately, the masses of transports came back to haunt us.

Revtrack will release to Prod in the order that the transports were released.
Even so, we knew that adjustments to the data dictionary objects would kill the SuperTransport, and items would not activate properly on the first run.

The only way to ensure that everything activated properly was to transport them all…TWICE.

The risk
The main issue with the migration was that it contained pricing routines and pricing condition tables, which had the potential to stop SD in production in its tracks if it didn't go in properly. Further, there were also extensions to customer data, pricing and output communication structures. Warehouse interfaces to SAP using RF and the .NET Connector were likely to be impacted by code regeneration due to the communication structures.

As the migration was considered too high-risk to run during the day, with the potential to stop production capability, the changes went in at midnight on a weekend with a full complement of support staff monitoring the change.

We obtained special approval for a 2-hour maintenance window (the standard one was 1 hour). All the weekend background invoicing and rebate calculation jobs were put on hold. We had 21 Revtracks with 50+ transports and one SuperTransport containing 300 objects.

Luckily, it went in as planned: 1 hour 45 minutes. We had 15 minutes to spare. Production did not fall over.

Later we were notified that the development environment was being maintained at the same time, and it would have been potentially disastrous had the Production migration not gone in as planned.


Lessons

Here are some lessons I've learned here and in previous experiences.

  • For the first release in a major wave, plan and build the transport/release pack beforehand (objects in subsequent releases are typically less cross-dependent).
  • Data dictionary and cross-app objects can be put in separate Revtracks. Putting these in their own transports saves on the analysis costs for transport dependencies on programs (analysis costs are incurred at every migration stage).
  • Request a separate transport for the pricing condition table definition, or include it in your common data. This is typically generated by the Pricing Functional team.
  • Revtrack cannot manage the migration to Prod by itself. It's a good feature that it transports to production based on release date, and it's good for small support changes. But big project releases generally still require transport-level analysis.
  • Creating data in the development environment saves time. The functional team may not always like it, but it'll be better for them and for support later on. It also reduces the number of transports to track.
  • The whole transport path must be up and running for a major release pack.
  • The Basis support for the night must have special approval to move emergency items into production in case things go wrong.
  • Basis folks do not like super-transports because it is often hard to determine whether one is stuck in the system or still processing.

Wednesday, May 20, 2009

Looking for a laptop

Weng’s old laptop is about to die. It’s an old VAIO with a pre-dual-core chip. It’s gone through a warranty power-unit change. The mic is deaf. The video keeps switching over to the low-speed USB hub driver. There’s a crackle in the speakers if you turn the volume up past half the bar. It’s got more spyware and trackers on it than a street dog’s poo has flies. And most of all, Caeden (our 2-year-old) had a go at finding what’s under each key. I never got them all back in.

We went over to Dick Smith’s to browse the new ones. I kinda liked the MacBooks, so we went and inspected them. We previously owned one, but if the thing was a car, you’d rather take a walk. These new ones are pretty cool. But I kept thinking that I’m going to raze it and put Windows on. So if we bought one, we were pretty much paying top dollar because we thought it was shiny.

There were netbooks there. Weng’s first impression was “Look – a computer for Caeden”. But I shook it and thought… no, Leapfrog build their baby computers a little sturdier. Still, I thought it was perfect for what Weng wanted to use it for. She mainly uses it for word processing and surfing. It might struggle with the 20,000 photos we have in our library, but we have another unit for that anyway.

Five years ago I drooled at the thought of netbooks. But now that they’re here, I couldn’t be bothered. It really was only appealing in my head five years ago because, in my world, I was the only one who had it.

Moore’s law is still an amazing thing. I could probably run an early SAP version on a netbook with 2GB of RAM and a 60GB hard disk. Enterprise computing in your palmtop. :)

Anyway… I consider a laptop like any other useless household gadget. Its primary purpose is to make the user feel good. Oh yeah, occasionally you make some toast or a smoothie. But most of the time, you’re just sitting there grumbling about how you wish you’d got the better one with that extra feature for a few bucks more.

To date, we are still undecided. This is largely due to the availability of other computers at home. The low-end option is the Dell Inspiron laptop, which ticks all the boxes under $1000. But something tugs at me. The MacBook is sweet even at more than twice the price. If we end up there, I'll just say that iLife does not have an equivalent on the PC.

Monday, April 13, 2009

Supply Chain Optimization on Information

I was lucky enough to participate in a supply chain session once where Damian Jones, FMCG President of Linfox, spoke about the challenges facing the Australian logistics industry. Some of the things he talked about were the changing economic environment, upward cost drivers for logistics, climate change and the logistics skills shortage.

One of the things I remember in that talk was the argument that it is far better for FMCG manufacturing companies to outsource their logistics because it is not a differentiator in the marketplace.

There are global-optimization arguments against that statement, but they can be overcome by appropriate SLAs. Jones actually touched on the optimization problem when he mentioned that the demand for toilet paper never changes, but the variability in the demand for logistics services transporting toilet paper is quite high due to localized promotions.

At the time, my client was a food company engaged in talks with Linfox to put in a national DC. Informally, I was asked whether the related IT infrastructure was better off outsourced as well. Of significance were the Warehouse and Transport Management Systems. There had been a significant investment in SAP prior to the talks with Linfox, which complicated the decision.

In return, I asked if holding the IT infrastructure had any strategic significance.

I believed it was quite different to outsource transport information management than to outsource actual logistics. While outsourcing logistics is outsourcing a service, outsourcing warehouse and transport information can be viewed as relinquishing more control.

I was not there to see what eventuated. I suspect the WMS will be outsourced to Linfox, as it mainly holds operational concerns. Interfaces will be built with bits of transport information on both sides of the fence due to the prior investment in SAP. None of this touches on stock optimization across the chain, which is mainly controlled from the food company.

Agreements on stock levels and promotions are typically between the manufacturer and the grocery chains. This would have been an innovative opportunity, as putting the 3PLs in the loop can potentially provide further transport optimization benefits. But there are risks. I think the risk of losing tactical edge by sharing promotions information with logistics providers is a big psychological barrier that manufacturers face. And controls around tactical information are held sacred in the Marketing and Sales functions of firms.

Tuesday, March 31, 2009

Top heavy vs Bottom heavy

Someone once asked me which I prefer: projects which are “top-heavy” or “bottom-heavy”? He was referring to the project management context. My knee-jerk reaction was that nobody would want either. But thinking about it later, I allowed that the challenges firms face play a significant role in how projects can be effectively organized.

Top-heavy systems have an excess of management resources. They risk a bigger hierarchy with a corresponding loss of efficient communication. Bottom-heavy systems have more task-specific professionals. They risk underutilization of resources due to a lack of tracking and a loss of direction.

Some roles, such as Project Management Admin, Change Management, Bridge-to-business and Training, can be classified as part of the “management” hierarchy, lending the appearance that a project is top-heavy. There can be no one-size-fits-all guideline on whether such roles require a dedicated specialist or should be consolidated into team lead/project management roles. Project size and complexity dictate whether they are needed.

In my view, projects that are truly top-heavy are characterized by a low span of control and many levels of hierarchy, notwithstanding the specialist project roles mentioned above. In my experience, very few SAP projects in Australia are run this way. The projects that I have had the privilege of joining have dual hierarchies representing joint ownership between clients and implementing partners. Most often, the inefficiencies arise out of role conflict between parties owing to different parent-company allegiances rather than the sheer number of personnel, whether specialists or management-oriented.

So if the number of personnel types is not the primary question to ask for effective project organization, what is?

The top end plays more than just an administrative role. They play a role in developing a position for long-term goals. In self-managing teams, the top-end administrative tasks are almost negligible. The bottom end, on the other hand, concentrates on finishing the task at hand. So, for example, if the project relates to long-term strategies impacting market positioning or internal culture change, then do not shy away from a top-heavy structure. In contrast, fairly task-oriented, specialist development can be done bottom-heavy.

The challenges that firms face play a significant role in how projects can be effectively organized. The two ideas I am advocating here are: (1) position management levels for strategic leadership, and (2) enable task specialists to effectively self-organize and adapt to changing market requirements.

Sunday, March 29, 2009

Break... F1

Away from the business app issues this week.

Formula 1 was on. I think F1 did a great thing with the introduction of KERS and the capping of team budgets. The introduction of KERS showed some surprising agility from F1 in responding “in spirit” to the global climate change issue. The capping of testing time and team budgets is a partial response to the global financial crisis. It also serves F1 well, levelling the playing field slightly and making the sport slightly more watchable. I agree with it. There’s an argument that F1 is about being the best of the best. But there is a point of decreasing returns at which it takes a horrendous amount of capital and human energy to gain a marginal improvement. At that point, the best-of-the-best argument gives way to responsible leadership.

KERS – the Kinetic Energy Recovery System – is a technology that allows the cars to store kinetic energy under braking and release it for acceleration. The teams had scrambled to get it developed. The top 6 at the finish line did not have KERS on, which only goes to show the technical difficulties in developing the technology.

The rave of the race was the 1-2 finish of newcomer (Virgin!) Brawn GP. Ross Brawn is the former Ferrari technical director from Schumacher’s winning days. He bought the former Honda team when it bowed out of the sport last year. Jenson Button and Rubens Barrichello were staring at the end of their careers when Brawn GP picked them up and gave them the ride. Both drivers were ecstatic and did not disappoint.

‘Rock-star’ Virgin CEO Richard Branson was prominent at the event, having picked up the sponsorship only days before. Brawn GP’s an awesome name, but I reckon Virgin Brawn would be a much bigger advertising coup, Virgin Brawn being a play on words meaning “raw power”.

Local hero Mark Webber of Red Bull Racing had a frustrating meet. He crashed into Rubens on the first corner and was relegated to the back of the pack for the rest of the race (good on him for finishing, though). Teammate Sebastian Vettel wasted a chance. He was second late in the race when failing soft tyres resulted in a corner collision with Robert Kubica.

The McLaren cars were pitiful. Both were out of the top 10 in qualifying. McLaren star Hamilton did an amazing job to pick up third after having to start at the bottom.

It will be an interesting season. There is a controversy about the (Virgin) Brawn GP rear diffusers. The Oz GP allowed them, but expect them to be challenged in the FIA courts. With Ross Brawn’s clout, however, it’s bound to be carried through.

Tuesday, March 24, 2009

Insure against Knowledge loss by using Collaborative Tools

I was bemoaning the lack of collaborative tools at one of my clients. The recent impact of the economic crisis has downsized the company, and the job cuts have gone to the extent that critical skills and knowledge have been lost.

In a recent exercise, an interface to a manufacturing subsystem had to be modified so that a new machine could be allowed for the interface. It just so happens that the same machine type has been used at other sites, and in a slightly different manner. No available documentation pointed to the business rules. With the help of business colleagues, it took me 2 days to understand the dilemma, and the exercise to manage the change has not been completed to date.

In this particular case, there was sufficient skill left to piece together a good-enough picture. But there are other areas that are not so lucky.

I was thinking that perhaps if the company had a library of wikis and blogs, the effort would not have been so hard. Collaboration tools have been in the market for a while… SharePoint comes to mind (see “SharePoint and Enterprise 2.0: The good, the bad, and the ugly” by D. Hinchcliffe).

Australian firms value human capital, as evidenced by their emphasis on HR and Talent Management systems. It is a far bigger challenge to change the culture of firms to take advantage of collaborative systems, so that they capture the value in complex relationships and highly unstructured information.

Thursday, March 19, 2009

Beautiful design

I am a regular visitor to TED.com. Recently, I ran into an old talk by Don Norman. He spoke of designs that make people happy to use them or own them. He spoke of the problematic Jaguar that every owner loves. He spoke of the orange juicer he used as a home decoration piece.

The talk goes on to explore why functional capability is not the only basis for beautiful design.

I wondered how to extend the sentiment into ERP application design. Paradoxically, the single biggest strength of structured applications is that both processes and applications are centered on functional capability. The complex nature of business processes means that a generalization of commercial applications tends towards the complex. There is a tendency to trade off intuitive usability.

In contrast, Don Norman's example of the o's in the Goooooogle search engine is simple, almost unnoticeable to the user, and yet delivers the search functionality it wishes to deliver. Beautiful, intuitive design.

Wednesday, March 18, 2009

Rethinking the environment

I am writing this in the middle of the global financial crisis. The media are already comparing it to the 1930s recession in the United States. I don’t blame them. With 650,000 jobs lost over the last month in the US, it could well be.

At any rate, it is a good time to think about how to approach the changing environment for those of us who are independent consultants. Our living is based on servicing companies with an appetite for innovation. In the current environment, capital is tight, and we should be mindful that there is a high bar for the rationalization of projects.

The only respite is that the crisis will force companies to change. Perhaps even more so in a volatile environment, as companies adapt to resize their capabilities to match a shrinking market. Alongside that change is a demand to update IT systems. There will be demand to optimize IT systems to do more with less. And there will be demand to automate transactions to alleviate the load pressure resulting from reduced workforces.

There will be a temporary demand for support roles instead of innovation/project roles. Independent consultants who hunger for leading-edge projects need to reassess their choice of roles, as the nature of available work will change.