How NOT to build secure applications

An experience summary of finding and reporting a security vulnerability in the customer portal of a leading asset management company.

Context

Security is not optional in today's world, and if you are here reading this, I am sure you are already aware of that. So I will not dwell on the importance of making security a core part of your product development process. This incident happened when I logged in to the customer portal of one of the leading asset management companies. The portal had a search field, like many others we see daily all over the web, and all was going well as I searched through the funds, until I searched for one particular fund name.

Not doing enough testing

The only thing different about this search keyword was that it contained a single quote. That's it. That was enough to break the search implementation of a major asset management application. As soon as I entered the keyword, an error popup appeared on the screen with the entire SQL query in the error message, and I knew exactly what had caused it.

The single quote.

Now, search-related features are always tricky to test, but thankfully a lot of content is already available on testing heuristics for them: searching with spaces, special characters, numbers, and so on. The team didn't even need a security-focused test to catch this issue; a simple functional test case, built by sampling the available fund names, would have surfaced it.
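To make the failure concrete, here is a minimal sketch of what likely went wrong and how a parameterized query prevents it. This is an illustration in Python with an in-memory SQLite database, not the Java/PostgreSQL stack the portal appears to use, and the table and fund name are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE funds (name TEXT)")
conn.execute("INSERT INTO funds VALUES (?)", ("St Mary's Income Fund",))

keyword = "Mary's"  # a search term containing a single quote

# Vulnerable: concatenating user input into the SQL string.
# The quote in the keyword terminates the string literal early,
# producing exactly the kind of syntax error the portal showed.
try:
    conn.execute("SELECT name FROM funds WHERE name LIKE '%" + keyword + "%'")
except sqlite3.OperationalError as e:
    print("query broke:", e)

# Safe: a parameterized query treats the quote as data, not SQL.
rows = conn.execute(
    "SELECT name FROM funds WHERE name LIKE ?", ("%" + keyword + "%",)
).fetchall()
print(rows)  # the fund is found, quote and all
```

The same idea applies directly to the stack in the error message: Spring's JdbcTemplate supports `?` placeholders, and a prepared statement would have treated the quote as part of the search term rather than as SQL.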

Apart from exploratory testing and penetration testing, there are also specialized static code analysis tools that could have helped identify this vulnerability in code during the development phase itself.

But somehow it was missed, and mistakes do happen. This SQL injection might even have gone unnoticed if the runtime SQL exception had been handled properly.

Allowing low-level errors to be passed onto UI

But it wasn't. As the search failed, what caught my eye was the strange error message on the screen:

StatementCallback; bad SQL grammar[<Entire SQL query here>]; 
nested exception is org.postgres.util.PSQLException: Error: syntax error at or near Position: 799

I had reported SQL injection vulnerabilities in a few of my previous projects while testing, so I was well aware of what this meant.

Wow! A SQL injection vulnerability! That too, in 2021. I didn't see that coming.

The difference between my previous experiences and this one was that, back then, the SQL query could be known only if you had access to the application logs; the service itself returned a 500 with "internal server error" as the message. In this case, the entire SQL query was returned in the error response and shown on the UI in the error popup. Boy, that was a surprise.

This could have been easily prevented with exception handling that returned generic error messages to the web client while logging the full exceptions for debugging.
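A sketch of that pattern, again in Python with sqlite3 for illustration (the function name and messages are hypothetical, not from the portal): log the full exception server-side, and hand the client only a generic message.

```python
import logging
import sqlite3

logger = logging.getLogger("search")

def search_funds(conn, keyword):
    """Hypothetical search handler: log the real error, return a generic one."""
    try:
        rows = conn.execute(
            "SELECT name FROM funds WHERE name LIKE ?", ("%" + keyword + "%",)
        ).fetchall()
        return {"results": rows}
    except sqlite3.Error:
        # The query, stack trace, and driver details go to server logs only.
        logger.exception("fund search failed for keyword=%r", keyword)
        # The client never sees SQL, exception classes, or stack traces.
        return {"error": "Something went wrong. Please try again later."}
```

With this in place, even an unhandled database error (say, the table is missing) produces a bland message on the UI instead of a `PSQLException` with the full query, while the team still gets everything it needs in the logs.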

But that wasn't how the errors were handled. And all this would still have been okay if the response to the security incident report had been prompt.

Not creating processes to handle reported security incidents

And again, it wasn't. Like any good QA, I reported the issue to the team so that they could fix it on priority before any malicious users found it (if they hadn't already).

But there were a few hurdles:

  • There was no option to report security-related incidents, so I had to email customer care with the issue that I had found.
  • Since I was aware this was a technical issue, I asked them to forward this to the development team so that they could act on this.

What followed was a series of email conversations and phone calls over the next 3 days, where I had to explicitly point out which search field had the issue and demonstrate the vulnerability by entering SQL queries in it (I had to spend some time figuring out the PostgreSQL syntax).

At the end of all this, what I expected was a note of thanks and a commitment to fix the issue ASAP. What I got was no response (seriously? It seemed like I was more worried about the vulnerability than the team was).

If Day 0 was the day I found and reported the issue, it was fixed and deployed to production on Day 51. At least, that's when I received a communication from the team that the issue had been fixed (I had almost forgotten about it by then).

I retested it, confirmed the fix, and that is why I am writing this now to share my experience.

How to build secure apps

If you build products that are exposed to the external world, it's very important to build a security-first mindset in the team. This means more than doing penetration testing at the end of the cycle or using random passwords. A security-aware team thinks about the security of the application while:

  • Writing user stories: are there any use cases where authorization is necessary?
  • Doing technical design: how should the application handle unauthorized access? What data is being exposed through APIs?
  • Developing features: what information should be logged? Are we passing PII as query params?
  • Testing: can I expose any security vulnerability using tools such as ZAP or Burp Suite, or by applying knowledge from the OWASP Top 10?
  • Building and releasing features: have we done threat modeling exercises with the team to identify possible security gaps? Are there static code analysis tools that can be integrated with the build pipelines?
  • Preparing the training guide for production ops: how do we handle a reported security event? How do we monitor the production environment for malicious activity?

You can read more about security practices here: thoughtworks.com/security