Maintainable Code in the Era of Low-Code

Omnext just announced a new service to help customers understand the maintainability of the applications they build with OutSystems. This raises the question: what is maintainable code when we’re talking about a low-code platform? In this post we’ll highlight why this question matters, and why companies like KPMG (who build many low-code applications and happen to be “OutSystems 2018 Partner of the Year”) take it seriously.

Speed vs. Quality

Speed versus quality is a classic balancing act that applies to the process of building software. For those who prioritize quality, it makes sense that there would be a healthy skepticism of development platforms that promise greater speed. “If a platform helps a developer build software 10 times faster, surely this will lead to quality issues,” might be the thought process.

To address this concern, it’s helpful to substitute the word “speed” with “automation.” The speed increases delivered by low-code development platforms relate to things developers no longer have to do. Various studies show that as much as 60–80 percent of development time is spent on repetitive activities. The way we write code to take data from a database table and present it on a screen doesn’t vary much from screen to screen. The way we ensure a screen displays well regardless of form factor tends to be the same for each screen. The way we ensure security, internationalization, accessibility… you get the picture.

There are many accelerators already built into the leading IDEs to help automate these types of activities by generating various snippets of code. Low-code development platforms extend this further by allowing developers to visually model solutions without even having to write code. The platform takes care of all the repetitive stuff. Does this mean the developer is no longer coding? Absolutely not. The developer still has to model the logic and data of the application, design the screens, connect to data sources, maintain good architecture, and so on.

Speed vs. Control

Assembly Language Macros

This claim of 10x development speed is often perceived negatively by developers who worry they are losing control over the “code” of their application. I think of it differently. My first coding job was in assembly language, and it took forever to do even the most basic things. When macros were introduced to automate instruction coding for assembly, there was a similar concern from developers: “I don’t like it! I’m losing control!” When I transitioned to C++, I didn’t once miss loading registers or trying to squeeze code into a 4k segment—I was able to focus instead on what the software needed to actually do. Then came Java: garbage collection meant no more worrying about memory, and no more wrestling with apps that needed to run on more than one operating system or platform. This felt like nirvana compared to before.

Each of these language evolutions automated many of their predecessors’ tasks; tasks that consumed so much of my time as a developer. Instead of hunting down memory leaks, I could focus on solving the business problem. The automation that low-code development platforms provide is the reason teams are able to deliver software substantially faster. I would say that those who worry about this being a loss of control are just kidding themselves. To me, that’s like worrying about what happens to your JavaScript code after it is written.

Back to Quality

Growing up in the U.K., the phrase “more haste, less speed” was commonplace, and I never really thought too much about what it meant. Thanks to automation, low-code enables a much higher degree of “haste” than lower-level programming languages. You can deliver meaningful outcomes to the business faster compared to manual coding. But, like any programming language, a developer rushing to deliver solutions with low-code can make mistakes. “Speed” without quality can result in a poor architecture that, in turn, leads to issues in areas like performance, security, or maintainability.

So, how do you ensure quality with a low-code development platform? It turns out that it isn’t really that different from traditional programming languages. Best practices are defined and then checked, either manually or, better yet, automatically.
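As a hedged illustration (this is not Omnext’s actual implementation; the rule, threshold, and model structure here are invented for the example), an automated best-practice check boils down to running a set of rules over the elements of a model and collecting violations:

```python
# Hypothetical sketch of an automated best-practice check.
# The rule name, threshold, and model structure are illustrative only.

def check_model(elements, rules):
    """Run each rule against every model element and collect violations."""
    violations = []
    for element in elements:
        for rule in rules:
            message = rule(element)
            if message:
                violations.append((element["name"], message))
    return violations

# Example rule: flag screens with too many widgets (the threshold is made up).
def too_many_widgets(element):
    if element.get("type") == "screen" and element.get("widgets", 0) > 50:
        return "Screen has more than 50 widgets; consider splitting it."
    return None

elements = [
    {"name": "CustomerList", "type": "screen", "widgets": 12},
    {"name": "OrderEntry", "type": "screen", "widgets": 73},
]
print(check_model(elements, [too_many_widgets]))  # flags only OrderEntry
```

Real analysis tools apply hundreds of such rules, weighted by severity, but the principle is the same: codified best practices plus automation.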

Enter Omnext

Omnext is a company focused on providing automation solutions that ensure the quality of software written in over 30 different languages. They have developed a solution for OutSystems customers based on the ISO-25010 guidelines for software maintainability and quality supplemented with OutSystems-specific best practices. They help organizations answer the following types of questions across a portfolio of OutSystems applications:

  • How has the maintainability changed since the last sprint?
  • How do the code quality and productivity of my scrum teams differ?
  • How can I reduce my maintenance costs?
  • To what extent are components re-used?

Omnext Fit Test provides a snapshot of an app’s current software risks. Customers can also sign up for a regular cadence of risk scans with the Stay Fit program. Results of the analysis are presented in an elegant dashboard within the Omnext Portal. A point-based scoring system makes issues easy to identify.

Omnext Dashboard

Omnext has developed their own ORQA index, a quality score based on the impact of any best practice violations and the effort required to resolve them. Adapted to include OutSystems best practices gathered from years of experience across large portfolios of applications, this index can be used to help prioritize any remediation efforts.

By comparing points-in-time, you can see whether new issues are being introduced as your teams deliver new releases. To facilitate the fixing of issues, the Omnext Portal provides more detailed views to help pinpoint exactly where in your OutSystems model those issues were found. Since organizations may have different standards, custom rules are also supported to further tailor the results of the analysis.

Omnext Code Analysis Dashboard

Tangible Benefits of Understanding Quality

Intuitively, we want our software to be of higher quality. When it comes to security and performance, the potential repercussions of low quality are self-explanatory. Low-code maintainability may have some less obvious benefits, but they, too, are important.

Future Readiness

Applications that are built following best practice guidelines are likely to last longer and add value to their users for a longer period. They are also easier to adapt to changing requirements.

Reduced Maintenance

It is easier to maintain an application that is built well. Less time spent on maintenance means extra time available to deliver new, high-value functionality.

Enhanced Reusability

Well-architected elements are more likely to be used again, and reuse can increase productivity.

Improved Skills

Sharing quality insights with your development team helps them understand how to architect higher quality applications, raising the overall skill level of your team.

Ultimately, while the higher level of abstraction enables much greater speed, developing with low-code is not that different compared to writing code by hand. Quality still needs to be built into your development practices, and by automating your quality checks, further efficiencies can be gained.

To find out more about how Omnext can help you identify opportunities that might be hiding in your OutSystems applications, check out their website. For a limited time, they are offering a free Quick Fit Test scan of an application. For those who need more assistance, Omnext has partnered with KPMG so you can take advantage of their depth of experience with OutSystems. KPMG will conduct expert reviews of an organization’s applications and draft detailed improvement reports.

Now, go in haste.

Written by Mike Hughes – Senior Director of Product Marketing, OutSystems.

Link to original post: https://www.outsystems.com/blog/posts/maintainable-code-low-code/

New! – Automated OutSystems quality reviewing with Omnext

As model-driven development emerges, OutSystems has become one of the leading low-code development platforms for building both web and mobile applications. Model-driven, or low-code, development platforms such as OutSystems have a major advantage over their full-code cousins: rapid application development combined with full control over the application life cycle.

One thing must be kept in mind, though: the delicate relationship between speed and quality. Building applications rapidly may result in applications of poorer quality. Even though low-code platforms such as OutSystems are closing the gap between IT and business, creating future-proof applications remains key to reaching the maximum potential of these platforms.

Fortunately, OutSystems customers are realizing this exact point as well, and they have been increasingly asking for governance of their software development processes. They are looking for answers to questions such as: How has the maintainability changed since the last sprint? How do the code quality and productivity of my scrum teams differ? How can I reduce my maintenance costs? To what extent are components re-used?

In order to meet customers’ wishes and help them reach the full potential that low-code platforms have to offer, Transfer Solutions and Omnext have joined forces to realize an automated quality scan for OutSystems.

Omnext has been specialising in analysing software applications for over 15 years, and its technology, the Omnext Fit Test, already supports over 30 different languages: modern ones such as Java, C#, and SQL, but also legacy languages such as RPG and COBOL.

Together with both Transfer Solutions and OutSystems, quality rules were defined based on the ISO 25010 guidelines for software maintainability and quality, supplemented with OutSystems-specific best practices. The importance of these rules is adjustable, and customers are able to add their own rules, enabling them to gain insight into the exact elements of an application that are of key interest to them.

After working together closely with Transfer Solutions and OutSystems for several months, Omnext has managed to add the OutSystems platform to its list of supported technologies and we are proud to present the result: The Fit Test 4 OutSystems.

The process for gaining insight into code quality looks like this:

  1. Source code upload

A customer sends their OutSystems application exports (.oml files) to Omnext via a secure upload facility;

  2. Code scan

Omnext scans the models and code and presents the quality metrics in a user-friendly dashboard, which provides detailed insight into a multitude of quality and productivity elements (see image below);

  3. Dashboard and Expert Review

The results presented in the dashboard enable the customer to identify areas for improvement. If required, Transfer Solutions can offer its expertise in the form of Expert Reviews to help customers take the next step in improving the quality of their applications.

The Fit Test 4 OutSystems can be executed as a single scan, but also as a repetitive cycle that allows customers to analyse their applications on a frequent basis. Especially in a continuous delivery context, continuous insight into the quality and development of your application proves to be of great added value. To further support this in the future, Omnext, together with OutSystems, is already exploring the possibility of using an API to automatically transfer the application sources to Omnext and have them analysed on the fly.

We believe that the Fit Test 4 OutSystems is a perfect addition to the OutSystems Platform. With it we ensure that our customers can deliver high quality OutSystems applications now and in the future.

If you have any questions in response to this blog post, feel free to contact Omnext or Transfer Solutions.

Jacob Beeuwkes, Transfer Solutions
Francis Jansen, Omnext

What is the quality of the Mendix App Store modules? – Part 1

That’s the question we asked ourselves when we became more and more enthusiastic about the ability to check the quality of Mendix models. The solution we used is the Fit Test for Mendix provided by Omnext. The goal of the Fit Test is to analyse the quality of your Mendix applications, based both on ISO 25010 and on a broad set of Mendix Best Practices & Guidelines regarding maintainability, performance, security, and conventions. Not only does it indicate any violations, but it also tells you where the violations are and how to resolve them. It’s basically an automated peer review that goes through all the nitty-gritty details every time you perform a check. We asked Omnext to run an automated scan of all modules in the Mendix App Store using the SDK, and we’d love to share some of the results.

So what are the things we looked into?

Mendix Best Practices & Guidelines

The Mendix Best Practices & Guidelines were defined together with many Mendix developers and MVPs (Most Valuable Professionals). Some are common and simple but often forgotten; others are more complex. We are all human, and these errors happen to all of us when we develop and deliver at the speed of light. In total we checked 46 modules that ran on Mendix 6.6 and higher.

Overall score very high

Our first conclusion is that the overall quality of the modules in the App Store is high. Over 90% of the modules have a score of 4 stars (out of 5) or higher. This is high, as the average score for software in general is 3 stars. The power of model-driven development helps to bring in those high numbers.

We also looked for any blocking violations. An important one is “Avoid commit in before and after commit actions”. On Mendix entities you can have Events (before/after create/commit/delete). These events can help you keep your model consistent and are often used to do calculations and checks (instead of calculated/microflow attributes). If you commit the same object in an event, it will trigger the event again and again: your app will enter an infinite loop and freeze. So don’t deploy any app with this violation.

Happily, none of the modules in the App Store has this blocking violation!

Highest number of violations

A very long list of best practices is checked with the Fit Test. The one with the most violations in the App Store is ‘Avoid commit in loop’. We found that almost a quarter of all modules in the App Store violate this rule, some of them multiple times. In certain cases this can reduce the performance of the affected modules.

Why is this wrong? Well, each commit inside a loop generates a command to the database to update or create a record. This is a potential performance killer when you have a big list to iterate over. The Fit Test shows you where you made this error and explains that you can correct the issue by:

Step 1) first create a list

Step 2) add your items in a loop

Step 3) after the loop commit your list.

The result is a better performing microflow and a developer who gained some further knowledge. Great, isn’t it?
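Mendix microflows are modelled visually, so no literal code is involved, but the same anti-pattern and fix can be sketched in plain Python with SQLite (an analogy only; the table and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")

# Anti-pattern: committing inside the loop, one round trip per item.
for i in range(100):
    conn.execute("INSERT INTO items VALUES (?, ?)", (i, f"item-{i}"))
    conn.commit()  # expensive when repeated on every iteration

conn.execute("DELETE FROM items")

# Best practice: build the list inside the loop, commit once after it.
rows = [(i, f"item-{i}") for i in range(100)]
conn.executemany("INSERT INTO items VALUES (?, ?)", rows)
conn.commit()  # a single commit after the loop

print(conn.execute("SELECT COUNT(*) FROM items").fetchone()[0])
```

Both versions store the same 100 rows; the second just does it with one commit instead of a hundred, which is exactly the saving the “Avoid commit in loop” rule is after.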


This post is the first in a series of blogs about how to improve the quality of Mendix applications. Look out for the next one, where we will tell you more about other best practices, such as naming conventions, and their advantages. Furthermore, we will tell you how you can access the Omnext portal so we can jointly work on even better solutions for all Mendix users.

This post was written by Appronto, our Mendix Technology Partner.

1+1=3!

Contemporary static analysis tools such as SonarQube can contribute significantly to the quality of your source code. With the proper use of these tools, the introduction of new technical maintenance risks is largely mitigated, which can yield significant savings for your organisation in the long term. However, a major disadvantage of these tools is that the identified risks and violations are focused on one, or a limited number of, technologies and are too technical in nature. Your Risk Management department will often be aware that these tools are being deployed, but for them the applicability of the tools is (probably) nil, because the tools do not provide insight into functional risks.

The effectiveness of such an analysis tool could be much greater if, in addition to the technical risks, the functional risks are set out.

 

What does the functional question have to do with the source code?

What if you assume that a piece of technical source code is nothing more than the implementation of a functional question? Might you then conclude that functional risks can be derived on the basis of technical risks? The answer is: yes, and quite rightly so.

 

But why would you want to map out functional risks? The following five answers may be given:

  1. Because the user thinks and speaks in functional terms and not in technical terms;
  2. By nature, change requests are almost always functional. “The user wishes…”;
  3. The impact analysis of a Request For Change (RFC) may be estimated much better if the person handling the RFC has facts related to the source code affected by the change request;
  4. The quality of a technical adjustment may be monitored retrospectively with a follow-up measurement, by monitoring the functional change and setting it off against the expected impact. In other words, the circle is completed: you expect a certain impact from an RFC, the RFC is implemented, and afterwards you check whether your estimate was correct. How much of a learning organisation can you be?
  5. Customers who have outsourced their systems’ management are keen to monitor the quality development of their system but do not have the in-house technical know-how, for example because the supplier does not provide transparency and openness.

So it would be nice to have a tool in which it is possible to store multiple technologies and to visualize the source code such that the end user also gets a functional view of the system.

 

How does this help close the gap between the end user and the techie?

Can this be achieved, and how does such a thing work in practice? From my experience, I can say that it is definitely achievable. The various parties are involved, and the following steps may be taken:

  • The Functional Management department draws up a list of all functional components that are stored in the system.
  • The Development department draws up a list of all technical components that are stored in the version control.
  • Both lists are related to each other via a link. This is a manual process for which it is necessary that both parties get round the table together.
  • The resulting mapping is loaded into the tool and the quality measurement is conducted.
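As a hedged sketch of the idea (the component names, file names, and risk counts below are invented for illustration), the link between functional and technical components can be as simple as a mapping that lets technical findings be rolled up to the functional level the end user understands:

```python
# Hypothetical illustration: linking functional components to technical
# components so technical risk findings can be reported in functional terms.

# The list drawn up by the Functional Management department.
functional_components = ["Payments", "Customer Portal"]

# The manual link produced by getting both parties round the table.
link = {
    "Payments": ["pay_service.c", "ledger.sql"],
    "Customer Portal": ["portal_ui.js", "session.c"],
}

# Technical risk findings from a source code scan (counts are made up).
technical_risks = {
    "pay_service.c": 3,
    "ledger.sql": 1,
    "portal_ui.js": 0,
    "session.c": 2,
}

# Roll the technical risks up to the functional level.
functional_risks = {
    component: sum(technical_risks[f] for f in files)
    for component, files in link.items()
}
print(functional_risks)
```

With this mapping in place, an RFC phrased as “the user wishes to change Payments” can immediately be related to the affected source files and their measured risks.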

In this way the classical gap between the end user and the techie is bridged. The tool is required to support many technologies (from classic programming languages to workflow, 4GLs, document generators, etc.) and must be highly adjustable.

Then and only then can this gap be bridged and the sum “1 + 1 = 3” be made!

 

Frans van den Berg

Principal Consultant

Frans is Principal Consultant at Omnext, specialising in source code analysis of software applications using Fit Testing and Stay Fit programs. Frans has extensive experience in evaluating the quality of software systems including applications of government agencies and financial institutions.

Any questions? Then please contact Frans.

6 security points to consider for your software

Would you say that the building presented above is a safe building? I probably would not. It might even be more unsafe than I originally thought. This issue does not only apply to buildings; it also applies to something we deal with every single day: software. Nowadays, security in software is a significant issue. And rightly so! Have you ever stopped and considered that security risks could also be lurking in the basis of your software, that is, the source code? We touch on 6 points from our experience, in order to help you get started with eliminating risks. Safely towards the future!

 

1. Be aware of the changes that are made in the software:

Our experience shows that change introduces risks. We live in a world where change is cemented into the foundation of any business. We all want to innovate, renew, and above all, be able to meet the demands of our (potential) customers. To ensure this degree of flexibility, IT must be able to grow in the direction of the business. This means frequent changes to your software, which in many cases is the engine of your business. Are you aware that each of these changes increases the likelihood of a security risk? That is logical in itself, and fortunately there is plenty you can do to ensure that you remain in control.

2. Outdated software:

Did you know that outdated software is not only a business obstacle, but also poses a risk in terms of security? Outdated software is often not developed in the spirit of “security by design” and may cause systems to become less stable and reliable over time. Of course you always want to rely on your software, especially in times when a lot has to be changed in the software.

 

3. Open source:

The use of open source components is becoming more common. Like (in our eyes) every IT company, we welcome this. Yet there are still some snags in the use of open source. Do you know if your open source components are all up to date? Maybe security issues were found in previous versions, and you’re running a greater risk than you might think. It all starts with the question: what open source components do I use, and what open source components do those components use in turn? Do you know?

 

4. Specific source code security risk: SQL (Structured Query Language) injection:

These are all great points, but what exactly can go wrong in the source code? Well, take a so-called SQL injection: a hacking technique that is often applied to applications and websites. Using such an injection, data may be extracted from a database or changed, and in some cases you can actually end up losing control of your server. So there are plenty of things that can cause headaches. What can you do about it? By securing your source code at the base, you can make it much harder for outsiders to abuse your application. To do this, you first have to have a full understanding of your source code, and source code reviews can give you this.
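A minimal, runnable Python/SQLite sketch makes the difference concrete (the table and payload are invented for the example): the vulnerable version concatenates user input into the query string, while the safe version uses a parameterized query so the input is treated as data, not as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: user input is concatenated straight into the query string,
# so the payload rewrites the WHERE clause to match every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(vulnerable))  # 1 -- the payload matched the admin row

# Safe: a parameterized query binds the input as a value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(safe))  # 0 -- no user is literally named "' OR '1'='1"
```

This is exactly the kind of pattern an automated source code review can flag: any query built by string concatenation from external input is a candidate injection point.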

 

5. Secure Programming: everything must be consistent

With a transfer of money, the amount deducted must be equal to the amount credited. If not, then money has disappeared. This comparison also applies to the data that many applications handle, and it applies to each application in its own way. Of course you know this as a developer, but under time pressure, inconsistencies creep in more often than we would like. Consistency in programming is a factor that contributes to the reliability and security of applications. Are you aware of how consistently your programmers set about things?
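The money-transfer consistency rule can be sketched in a few lines (a hypothetical illustration; a real application would enforce this with database transactions rather than an in-memory check):

```python
# Sketch of the consistency invariant for a money transfer:
# the amount debited must equal the amount credited,
# so the total across all accounts never changes.

def transfer(accounts, src, dst, amount):
    total_before = sum(accounts.values())
    accounts[src] -= amount
    accounts[dst] += amount
    # Consistency check: no money may appear or disappear.
    assert sum(accounts.values()) == total_before, "money leaked!"
    return accounts

accounts = {"checking": 500, "savings": 200}
transfer(accounts, "checking", "savings", 150)
print(accounts)  # {'checking': 350, 'savings': 350}
```

The value of stating the invariant explicitly is that any inconsistent change, wherever it is introduced, fails loudly instead of silently corrupting the data.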

 

6. Control and understanding: you can only eliminate risks with a full scope

A review of source code provides an overview of risks and other imperfections in the source code. Such a review may be carried out manually or automatically; in many cases, a combination of both is recommended. Why? According to experts, an automated review is indeed fast, but it often produces false positives, and it won’t recognize every weakness. We would like to convince you that here Omnext provides a complete service: a rapid, objective, automated review overseen by in-house experts who can remove false positives, discover patterns, and at the same time focus on your situation and on which results are important for your organization. In short: customised advice, based on objective observations of the entire source code. Know where you have to start!

Which of these points would you tackle first?

 

Anna Willems

Brand Manager

Anna Willems is brand manager at Omnext, expert in the measurement analysis of the source code of software applications with the aid of Fit Tests and Stay Fit programs. Anna has a clear vision on combining IT and business – from a marketing perspective – and how information from the source code can support business incentives.

Any questions? Ask Anna!

Source code revitalisation, a diet for Uniface® applications

I’ve just been inspired by our minister for Public Health, Edith Schippers, who is calling for our food and lifestyle to be made healthier. For me, every other day is an opportunity to pick up issues that haven’t been addressed before. Personally, and given my age, I think that I’m in quite good shape. So I’m only focussing on software, for example a diet for Uniface® applications.

Using up source code

A characteristic of applications built in Uniface® is that they have invariably been contributing to the critical business processes of organizations for many years. In fact, these types of applications are functionally everlasting. Applications that have been in operation for such a long time, however, tend to suffer from a form of source code overweight. This is because software developers rarely get the opportunity for a balance day, which is needed in order to remove excess functionality from the source code. Functional fat then automatically accumulates on the application body.

Experienced, but also Fit?

These applications, which embody extensive knowledge of and about the business processes, are expected to move with changing demand and technological developments. Fortunately, the Uniface® development product provides opportunities for this, so a vital old age is within reach for these applications.

The diet …

In order to keep your application up to par, it helps if you first get the application to the right weight. This can be done by identifying source code overweight, such as the fat in complexity and in duplicate and unused source code. After this, it helps to focus the training on losing weight around the vital parts. Experience shows that this usually covers only 20% to 30% of the application body.

Take action!

This brings me to the older but socially so important applications. They, too, are entitled to a vigorous old age. If world power can be in the hands of a pensioner who uses the latest technologies, then our software crown jewels certainly deserve a vital future, in an ecosystem of new technological applications.

Jaco de Vries

Jaco is director at Omnext, a company that investigates the vitality of software applications with the aid of Fit Tests and Stay fit programs.

Jaco de Vries

CEO Omnext