Is mobile development ready for GraphQL?

Most web and mobile applications rely on client–server communication, using the acquired data in their internal processes. In the early days, however, there were no established models for this communication. Engineers worked to create standards that would make it more legible and easier to implement; REST and SOAP are examples. Unfortunately, nothing in the world is perfect. REST became the main player, largely overtaking the slightly older SOAP, but years of use have exposed its shortcomings as the world of web and mobile applications changed. The main drivers of those changes were mobile apps, which make up a larger and larger share of the market.

Progress = new problems

Mobile devices have become the chief tool for accessing internet resources, yet smartphones have limited computing power and memory. Although their performance keeps rising, it is still far below that of laptops. Another constraint is the battery, which always runs out too soon :). That's why we want apps to burden the battery as little as possible. Initial analysis pointed to the amount of transferred data as a key factor: we send too much and use too little of it.

Somebody had to come up with a proper solution. In 2012, the Facebook app already had substantial traffic, so the volume of client–server data exchange was extremely large.

Facebook programmers decided to solve the problem by creating a new way of communicating that would limit the amount of data sent.

The practical reasons were:

  • higher usage of mobile apps
  • low power devices
  • unreliable internet connections which make it difficult to send larger amounts of data

Technical reasons were:

  • over-fetching
  • under-fetching
  • frequent changes required in REST APIs

GraphQL – new solution

So, what is this new Facebook tool? Actually, it's a query language based on a type schema defined for the API. Data is presented in the form of a graph – and that's where the name comes from. The solution was published in 2015 as open source. Today, it's actively developed and available in all popular programming environments. As opposed to REST, GraphQL uses a single endpoint which provides all the server-side data. Decisions about the amount and type of data are moved to the client side, which means the app decides what it wants to get from the server at any particular moment.

Major REST’s problem is a consistent scheme of acquired data. Example: App displays user’s name, so it downloads data from API. In REST, endpoint response has a consistent number of attributes compatible with the user’s data. In GraphQL, we are able to determine what we need from all the available data reducing the business of the response. In the presented example, what’s downloaded from the server is only the user’s name and nothing else. Using such a solution, programmers don’t have to modify the backend and create a special endpoint to provide only the user’s name. 

Solved technical problems:

  1. Over-fetching

The problem lies in downloading a data structure that consists of a large number of attributes, many of which will never be used in the application.

  2. Under-fetching

The problem is that the downloaded data is insufficient, so we have to send another query to the server and ask for additional data.

In both cases, in GraphQL we download only the data we currently need.

Changes in API

Endpoints created in the REST approach have a fixed structure. To download additional data, programmers have to modify existing endpoints or add supplemental ones to fulfill the needs of the client layer. Moreover, the client layer is not resistant to unexpected changes in endpoints, which can destabilize the app's operation. This kind of process is also an additional burden on the client's budget.

Summing up

The GraphQL approach solves these basic problems and is used more and more frequently. Will it substitute the REST approach? Probably not, but it's a strong alternative showing that some elements can be managed better.

Tests automation in a project – when and why?

The world of programming is characterized by two types of tests – manual and automated. Some manual tests, e.g. usability or exploratory ones, are an indispensable and inevitable part of a tester's work. Nevertheless, when it comes to regression or smoke testing, doing them manually – especially in an advanced project – can be extremely time-consuming or sometimes even impossible. In that case, automated tests come to the rescue. In this article we will focus on their pros and cons.

What are automated tests?

Automated tests are test cases performed by previously written scripts. With their help, we can do much more than just script the "clicks" around a web application. To understand them better, it's best to distinguish some of the most popular types:

  • unit testing
  • API testing
  • integration testing
  • end to end testing
  • security testing
  • performance testing

Let’s take a closer look at some of the most important ones.

End-to-end testing is the best way to make sure that a particular system works properly. It doesn't check individual methods' operations or how many times a certain function has been called. What matters most is the end result – the fact that a product has been added to the basket, an order has changed its status, a user has been logged in, etc. Such tests recreate an end user's actions, which can be quite beneficial. The purpose is to go through the processes using business logic to confirm that the implementation went well.

Performance tests don’t verify the functioning of buttons, fields, forms, and rolled-out lists. They examine the application’s behaviour in various load conditions – a load being a situation when x users use the same function simultaneously, eg. visit the same view of the app (searching through products or sending a return form). It’s worth adding that performance tests can be divided on the grounds of the amount of load and their purpose:

  • Performance testing shows the system's weaknesses and helps locate the so-called "bottlenecks" using low and high load.
  • Load testing verifies how the system works under the maximum load and whether it's capable of handling it at all.
  • Stress testing verifies whether the system launches e.g. a safe mode and whether it meets the safety standards. In this case, the load is significantly higher than the maximum.

Automated tests – pros and cons

Automated tests are indispensable when it comes to complex, long-range projects. The cost of creating tests that detect regression errors is much lower than hiring a manual tester for the same job – it's efficient both money- and people-wise. Crucial employees can then work on smaller projects, where manual testing is usually more profitable.

Preparing automated tests also allows better analysis and error reporting thanks to simple access to the history of test results. However, automated tests are not all milk and honey. The initial phase requires investing more time and money, and maintaining, running, and developing the scripts also generates costs – but in long-term projects they are still lower than the costs of manual regression testing.

What’s most important – automated tests can’t substitute a real person. Their purpose is performing the same test cases quickly but they won’t fill in for exploratory tests based on the tester’s imagination and experience. Automated tests usually cover only the basic test cases but allow a large number of test data and provide a software version faster. Additionally, testing new functionalities is typically performed manually to provide a thorough knowledge of a certain area. Later on, we can implement the automated testing, too.

Summing up

Following the market trends, it's hard to say what the future of automated tests will bring. However, taking a closer look at the world's giants that develop automation processes, machine learning, and artificial intelligence, we can say that investing in manual testers' development towards automated testing is a must. Despite the cost of creating these tests, they are invaluable in complex, long-lasting projects.

It’s good to remember that they don’t downgrade the tester’s qualifications. Quite the opposite. At the end, a tester is a person with full, in-depth business knowledge of a product which, as a result, provides the end users with satisfaction and is considered as highest priority in software quality assurance in the World Quality Report.
Do you want to implement automation tests in your project? Feel free to contact us, it will be our pleasure to cooperate with you. Together we can deliver everything!

Azure Functions – On Cloud Nine of Serverless

Let’s say you need to add some scheduling to your existing application. Perhaps you need to bring together an existing code written in different languages? Maybe you need a functionality that, although it’s quite important, won’t be used so often – thus you don’t want to pay for the whole infrastructure just for this little piece. Or maybe you just need to prototype and release a feature as quickest and cheapest as possible?

Here come Azure Functions to save the day!

What are Azure Functions?

Azure Functions is Microsoft's serverless computing solution, meaning that all the resources are managed by the cloud and the user only provides the logic that the application will execute. Such applications work in a pay-per-use pricing model, which means the user is charged only for the actual usage of the application.

To operate, a function needs to be triggered by some event. It can be a schedule, an HTTP request, or an event from Azure Event Grid. An Azure Function can also react to updates in storage services such as Blob Storage or Cosmos DB. Access to the function can be anonymous as well as authorized, both by a simple token and by more sophisticated solutions like Azure AD, Facebook, Google, or Twitter authentication.

Azure Functions support many languages – at the time of writing: C#, Java, JavaScript, Python, F#, TypeScript, and PowerShell. These are very popular languages, and chances are high that someone on your team knows one of them.

This diversity also comes in handy when you need to add some part of the system using another language. No one will probably want to write their system in multiple languages (I hope). But having a choice is always nice and can prove useful – especially when you have some legacy code and need to integrate it into a project written in another language. Instead of rewriting the whole thing, you can use an Azure Function!

Can Azure Functions really reduce costs? Yes, they can – notably so in certain cases. The user is billed only for the actual usage of the application, and you get a million (!) executions a month for free. In many cases, this limit won't even be exceeded. (You still pay for CPU and memory usage, but only for what has actually been used.)

By combining many Azure Functions that react to different types of events, it is possible to create a whole serverless system consisting of many small parts, supporting a microservice approach to the project. In more complicated cases, there might be a need for more advanced flow control. Azure Logic Apps can be used to achieve that, but it is a story for a whole different article.

Rise and shine… Not that I wish to imply that you have been sleeping on the job…

Now, wait a second! I knew I'd heard it before! Amazon AWS has had it for ages! It's called Lambda, and it can do the same! Why do we need another one?

You’ve heard it right! AWS Lambda is also a serverless solution. Both Lambda and Azure Function share a similar purpose, yet there are some subtle differences you might want to consider before choosing one side for your next project:

  • They both support the majority of popular programming languages, but Lambda additionally allows you to write in Go and Ruby, while Azure Functions support F# and TypeScript instead.
  • Lambda functions need to be invoked via an HTTP request, while the Microsoft solution, as already discussed, has a wide variety of possible triggers.
  • Azure Functions are more flexible when it comes to pricing plans. You can choose the Consumption Plan, which is "pay for what you use", but there is also a Premium plan which can give more performance, get rid of the cold start problem, and increase the timeout of the function.
  • Lambdas always run on Amazon Linux, whereas with Azure Functions you can choose between Windows and Linux runtimes.
  • Both solutions allow scalability and concurrent executions.

How do we use it?

In our projects, we have used Azure Functions to accomplish many tasks. Let’s share the most interesting ones!

Integrating with legacy code

Our whole application had been written in .NET Core, but it was part of a bigger and older system. At one point, we had to use an already written and well-tested library, applied in other parts of the system. The problem was, it had been written in Python (not that we have anything against Python, don't get us wrong). We could, of course, have rewritten it from scratch, but that would require time, both for developing and for testing the solution. Instead, we used an Azure Function that let us execute that code without worrying about deployment and infrastructure, and integrate it with our application over HTTP just as we would integrate with a regular REST API.

It is also worth mentioning that Azure Functions in version 1 support the full .NET Framework, while versions 2 and newer support .NET Core. This helped us solve a very similar case with legacy code written in an older version of the .NET Framework – the Azure Function worked as a proxy, solving all compatibility issues.

Scheduled actions

This is a pretty common scenario: an action that needs to be executed on a schedule, which normally requires initial configuration and orchestration. With Azure Functions, all these problems are taken care of. All that's needed is setting a schedule, which is as simple as modifying the configuration in the panel – see the sketch below.
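
As a rough illustration – assuming the function.json configuration model used by Azure Functions' scripting languages – a timer trigger firing every five minutes boils down to a single NCRONTAB expression:

  {
    "bindings": [
      {
        "name": "timer",
        "type": "timerTrigger",
        "direction": "in",
        "schedule": "0 */5 * * * *"
      }
    ]
  }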

The history of each run can be viewed in the panel with a result, errors, and duration.

Generating thumbnails

Users can upload photos that later can be viewed as a list. We wanted to generate thumbnails for each uploaded photo, but we didn’t want users to wait for them to be generated. It was a very simple system, with no true asynchronous operations, service bus, and so on. Doing that only for the sake of these operations would be too much. 

We used a Blob Storage-triggered Azure Function which watched for newly uploaded photos and generated a thumbnail for each one.

Downsides

Seeing all those amazing advantages offered by Azure Functions, it might be tempting to use them to deliver every part of the project. But that isn't necessarily a good idea.

There are a few things that need to be taken into consideration.

Serverless functions usually take more time to start up. This is called a "cold start" and means that the first run of the application after some period of inactivity takes longer than normal, since the application is getting its resources ready. However, it can be eliminated, for example by using a higher service plan.

Functions run in the cloud, which in reality is just someone else's computer. There are certain cases when you may not want to store your code and data on computers or servers other than yours – for example, because of business agreements or licenses. Running an app in the cloud is supposed to be easier, since you don't need to worry about infrastructure, but that also means you don't control it. In some cases that can become a problem – for instance, when you need a different runtime or some external libraries.

Azure Functions are meant to run relatively short tasks up to about 10 minutes. With an HTTP trigger, the timeout is set to 230 seconds (who would want to wait any longer for an HTTP request to complete anyway, let’s be honest…). 

But there are cases where an operation will take longer than that. A solution to this problem might be Azure Durable Function which has been designed as an orchestration solution. Such a function can run virtually forever. Durable functions deserve a separate article, so we’ll leave it for now.

Conclusion

We hope you’ve enjoyed our journey into the world of serverless. We encourage you to use Azure Functions in your next project, whether you’ll need scheduled tasks, integrations with other languages, quick prototyping, or maybe a way to cut the costs.


We have delivered many projects using that technology and are more than happy to help you! Feel free to contact us – together we can deliver another great project with serverless technology.

Analytics in UX design

While launching a UX process, we set ourselves the goal of finding the perfect solution for a certain problem. We use various methods to better understand the business assumptions, get to know the users, and recognize the environment for the product or service, the competition, and the technological capacities. We apply cognitive methods which give us qualitative insights. We observe the users, conduct interviews, build personas, design information architecture, perform various expert and heuristic analyses, build prototypes, and test them.

However, quantitative data are a perfect supplement to qualitative ones – useful as soon as the concept part of the process starts. The word "analytics" is usually associated with charts full of figures that can only make us dizzy. Yet, first of all, there's nothing to be afraid of. And secondly, analytics covers so much more than charts :).

What does analytics give us?

Analytical data reduce counter-productive discussions within a project team and with a client. It's no longer only a matter of what somebody feels – the data can show whether the feeling holds for a larger group of users or concerns only individuals. Checking the numbers, we'll see that, e.g., banner A converts great, whereas banner B doesn't. As a result, the arguments for version B are effectively put to rest.

Analytics allows us to get to know our users better. We can verify the declared data against behavioral data, which helps validate the information acquired in interviews with users. It's quite common for people to want to look good, embellishing their responses, or to misrepresent the facts unconsciously – e.g., asked whether they would be eager to use a Help tab in the service, they answer that they always do. Then, checking the quantitative data, we see that nobody has ever clicked the tab, although it has been properly prepared. That's why analytics helps track down all the imprecisions and build personas reflecting real users in the most accurate way.

Implementing analytics allows following various actions as they happen, most commonly live. We can literally track users through all the elements of interaction with our product and find out what they consider a great solution and what bothers them the most.

We can control the source of traffic and analyze it for planning future actions in both developing the product and the marketing strategy. Access to analytics lets us verify all the implemented changes.

Every business owner wants to know what their competition is up to. How to stay a step ahead of it? What helps is analytics connected with SEO operations.

Analytics doesn’t have to be linked to the users’ behaviour. There are also tests, e.g. performance, system’s speed, errors, restarts, and many other varied events that show the technical condition of our product. 

When and how to implement analytics in a UX process?

The scope of quantitative research should be established at the very beginning of the UX design process. After analysing the first material, we can determine areas where collecting quantitative data is possible and provides value. The first phase can include surveys and opinion polls, which engage the user directly.

During the phase of prototyping and implementing new solutions, it's worth analysing how the established goals are being fulfilled and whether they are pursued at all.

After introducing the product into the production environment, we get many answers – how it works in comparison to the competition, how it functions over time, who the users are and what they do, and, most importantly, whether they fulfill the goals, and if not – why. There are many possibilities to analyze all these matters and, depending on the need, it's worth choosing the most appropriate tool, e.g. heatmaps, clickmaps, session recordings, A/B tests, and analytical tools.

Can analytics be a threat?

It seems that analytical data, showing real values, provide us with only positive experiences. Nevertheless, they can cause consternation, too.

Let’s assume that a team worked 100 hours in a project hoping to enhance a functionality or a user’s path and… nothing changed. Data showed that the number of conversions didn’t rise and actually nothing happened. What then? Who’ll answer for the lack of results?

Analytical data don’t reflect good will or expectations. They provide information about what’s happening with the product. If our team and clients are not ready for failures, analytical data can cause unpleasant situations.

Summing up

It’s surely worth implementing quantitative research in a business process. Their scope should be adjusted to the project and its environment. The advantages are: a quick data analysis, low individual costs, short period of implementation, a possibility of analyzing the visual materials, and ensuring anonymity to research participants. 

We don’t have to fear the numbers. They don’t oppose the UX process – they serve it. Combining qualitative and quantitative research gives us a broader spectrum of knowledge, the highest project value, and, in consequence, a valuable product. 

Beautiful UI Tests with Kaspresso

Idea of UI tests

Many companies around the world employ manual testing for their applications, and they have their reasons. The advantages are numerous, including personal contact, understanding of the priorities, and inquisitiveness.

What’s more, UI tests in a long-term project may improve some aspects of QA which testers are struggling with.

I have listed some more or less common problems with manual testing:

  • Doing it repetitively is tedious and boring
  • It is vulnerable to human misunderstanding and oversight
  • Test cases are not portable
  • It isn't self-documenting
  • It requires many different devices with many different OS versions

Manual testing is great, especially when you want to publish your digital product quickly. After that, the most crucial screens of your app should be tested from code in order to avoid regressions and omissions.

Possibilities on Android platform

Nowadays, Android is a mature platform with many developed solutions. There are a lot of tools for UI testing, too: Espresso, ActivityScenario, Kakao, Kaspresso, Robolectric, Barista, Spoon, Robotium, Firebase Test Lab. But which one should you choose?

The right library should allow us to write tests that are:

  • Understandable for both developers and testers
  • Following the step-by-step principle
  • Focused on checking only the important parts

It turns out that Kaspresso takes advantage of Kotlin and fulfills our requirements with some extra features. Let’s find out what it really can do for us!

Kaspresso is a Kaspersky’s library built on top of the Kakao. Thanks to its Kotlin DSL nature and Page Object Pattern, code has context and is pretty readable. 

The official repository is hosted on GitHub.

Code that tells the truth

In a sample app, I have created a simple test case in which clicking a button changes the text on the screen. A pretty basic example, but it will show how clean Kaspresso's syntax is.

And the test code is as simple as the action – see the sketch below. No more verbose code that could be unobvious for most testers. Once written, the code is documentation of the screen for every person in the project.
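
The original samples were screenshots; here is a minimal sketch of what such a screen description and test might look like (the view IDs, MainActivity, and the expected text are assumptions about the sample app, and Kakao package names vary between library versions):

  import androidx.test.ext.junit.rules.activityScenarioRule
  import com.kaspersky.kaspresso.testcases.api.testcase.TestCase
  import io.github.kakaocup.kakao.screen.Screen
  import io.github.kakaocup.kakao.text.KButton
  import io.github.kakaocup.kakao.text.KTextView
  import org.junit.Rule
  import org.junit.Test

  // Page Object: describes the screen, not the Activity behind it.
  object MainScreen : Screen<MainScreen>() {
      val changeTextButton = KButton { withId(R.id.changeTextButton) }
      val title = KTextView { withId(R.id.title) }
  }

  class ChangeTextTest : TestCase() {

      @get:Rule
      val activityRule = activityScenarioRule<MainActivity>()

      @Test
      fun clickingButtonChangesText() = run {
          // The whole test reads like a description of the user's actions.
          MainScreen {
              changeTextButton.click()
              title.hasText("Button clicked")
          }
      }
  }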

This is especially useful when the screen has many hidden actions and testers are not always able to reproduce all the test paths.

Step by step

Sometimes the final result depends on the actions taken before.

In such cases, it would be awesome to describe larger tests in self-explanatory blocks. Of course, Kaspresso has a great step() function for that.

Moreover, the library simplifies the configuration before and after a given test. It is also worth mentioning that the created steps are logically printed in the console, which increases control over the running tasks.

Additionally, you can extract any repetitive fragment of a test into a simple scenario. This is accomplished by extending the Scenario abstract class. In the example below, button visibility checking has been extracted.
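
A hedged sketch of both mechanisms – step() blocks and a reusable Scenario – building on the MainScreen object from the previous sketch (screen and step names are illustrative, not the article's original code):

  import com.kaspersky.kaspresso.testcases.api.scenario.Scenario
  import com.kaspersky.kaspresso.testcases.core.testcontext.TestContext

  // Reusable fragment: checking that the action buttons are visible.
  class CheckButtonsScenario : Scenario() {
      override val steps: TestContext<Unit>.() -> Unit = {
          step("Action buttons are visible") {
              MainScreen {
                  changeTextButton.isVisible()
              }
          }
      }
  }

  class OrderFlowTest : TestCase() {

      @Test
      fun orderChangesStatus() = run {
          step("Open the order screen") {
              // navigation actions go here...
          }
          // The extracted scenario is reused between tests.
          scenario(CheckButtonsScenario())
          step("Confirm the order") {
              // clicks and assertions go here...
          }
      }
  }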

Work with Custom View

The next big advantage is that you can easily create your own view assertions, for example for a custom view with an uncommon result.

The screen in question contains its own implementation of a rotating image view. Unfortunately, Espresso is the base for Kakao, and therefore Kaspresso doesn't have an assertion for a rotated view out of the box. The workaround is to extend the BaseAssertions interface.
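
A sketch of what such a custom assertion might look like (the rotation check and all names are illustrative, and the BaseAssertions package varies between Kakao versions):

  import android.view.View
  import androidx.test.espresso.ViewAssertion
  import io.github.kakaocup.kakao.common.assertions.BaseAssertions

  // Custom assertion mixin for views that rotate their content.
  interface RotationAssertions : BaseAssertions {

      fun hasRotation(expectedDegrees: Float) {
          view.check(ViewAssertion { v: View?, notFound ->
              if (v == null) throw notFound
              // Fail with a readable message instead of a generic one.
              check(v.rotation == expectedDegrees) {
                  "Expected rotation $expectedDegrees but was ${v.rotation}"
              }
          })
      }
  }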

Creating assertions vastly improves the readability of the tests and allows encapsulating more complicated checks in dedicated interfaces. Afterwards, the test code becomes almost self-describing.

Under the hood

In the code samples I have used a Screen object. This object is the lowest layer of Kaspresso's abstraction. In the end, we need Espresso to point out which view should be referenced.
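
For instance, a screen description might be as small as this (a hedged sketch with illustrative view IDs; imports as in the first sketch):

  object ProfileScreen : Screen<ProfileScreen>() {
      val userName = KTextView { withId(R.id.userName) }
      val logoutButton = KButton { withId(R.id.logoutButton) }
  }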

In the code above, only withId() is an Espresso view pointer. This pattern is really handy as it simply describes the screen. No Activity, no Fragment, no Dialog – it is not important for the test or the tester what type of class the developer has used.

As the Screen object doesn't have to correspond to a single physical window, you can encapsulate any part of your view in a single file. Creating logical configurations may also be quite helpful, for example HomeScreenWithPremium or HomeScreenBasic.

Summary

Having a strong code base of UI tests is a big advantage. You can upload your scenarios to Firebase Test Lab or run them on a physical machine at any time – for example, when your testers are busy or you want to guard against regression.

You can achieve most of this syntax with the Kakao library alone, but I had a reason to present Kaspresso. It does many other things that I haven't mentioned:

  • Advanced interception of test actions
  • Clean and readable logging
  • Ability to call ADB commands
  • Perfect integration with Kautomator

With tools like these, writing UI tests is so clean that I believe it's an added value for both developers and testers.

If you are looking for the full code, check our repository with examples of ActivityScenario, Kakao, and Kaspresso usage.

Why fintech rocks in remote customer service

Fintech is an industry that has always been a pioneer when it comes to technological innovation. This helped fintech companies survive and thrive during the COVID-19 crisis: they were prepared for fully-remote operations, with mobile applications, excellent web platforms, and other digital services that allow their clients to do almost everything without leaving home. What is the secret of fintech, and what can we expect in the upcoming years?

If you are not using your bank's mobile app, you probably waste a lot of time standing in lines and visiting on-site facilities. Since fintech started its digital transformation, most financial affairs can be easily handled with online tools. Most bank clients do not need to visit their providers at all during the year. They can transfer money, apply for loans, and contact clerks conveniently with a smartphone, tablet, or laptop. And if there's a need to meet a bank representative, they can book an appointment with an app and avoid unnecessary contact with people, which is crucial during a pandemic.

Credit Agricole went one step further and used Booksy – a very popular app for scheduling hairdresser and beautician visits – to allow their clients to book meetings with their clerks. This proves that fintech is able to penetrate other sectors using unorthodox ideas.

Fintech innovations

What is particularly worth mentioning is the fact that fintech is one of the most open industries when it comes to introducing modern solutions. They are not afraid of educating their clients and giving them something new to learn – apparently, instead of scaring clients off, this helps them understand digitalization and invites it into other parts of their lives.

One of the innovations fintech embraces with courage is artificial intelligence. It can come in many shapes and forms – machine learning, voice bots and other advanced algorithms that can improve some operations and automate a lot of processes. In many situations, robots replace people as consultants, advisors, and client support.

Examples? Here you go:

In Poland, many well-known banks and financial institutions invest in innovation and introduce modern solutions for their clients. PKO BP, one of the biggest Polish banks, recently started to develop an artificial intelligence department to research how AI can be used to deliver even better services.

Santander Bank released a customer support chatbot that helps its clients solve problematic cases. Since June 2020, it has answered almost 10k questions in more than 2000 conversations. Sharing this tool with clients was a great step at a time when we are still not sure about the pandemic and how it will affect us in the near and far future.

Recently, ING created a robot that advises clients who want to invest their money. The algorithm can predict the levels of risks that can be accepted by a particular person and help them decide which investment funds are worth it.

The number of startups that open every year (or should we say – month!) around the world, especially in the US, is enormous. Not only banks but also financial companies conquer the world of fintech and race each other to provide more innovative services available in our pockets. A few clicks and we can invest, transfer money, pay for online shopping, and exchange currencies. They use the aforementioned AI, machine learning, and other superb technologies, but first and foremost – they make our lives easier.

Convenient financial operations from any location that only require a smartphone? That’s why so many of them become unicorns – companies worth over one billion dollars. They often offer services similar to banks but without so many formalities and requirements. During a coronavirus pandemic, such brands were appreciated even more because they were accessible and easy-to-use even for the less tech-savvy consumers.

What can we expect from fintech in the future?

Right now, we can see that fintech has revolutionized the digital market of innovation. This industry is always one step ahead of others, and it will probably stay that way, as banks and prominent startups have all the resources to expand in terms of technologies, solutions, and research.

Visa, one of the biggest debit and credit card companies, decided to invest heavily in fintech businesses in 2020. They believe that such companies are going to be one of the crucial elements of the current economy. In January, they announced the acquisition of Plaid – a network that allows clients to securely connect their bank accounts with financial management apps. This way, fintech companies will be able to offer their innovative services to customers who use such applications.

Collaboration between banks, fintech startups, and financial institutions is essential to provide a new quality of digital money services.

Why was fintech so well prepared when COVID-19 happened?

First of all, fintech was one of the rare industries that wanted to eliminate the need for face-to-face contact in their services – and they started to do that years before the pandemic. When the coronavirus broke out, they just needed to adjust a few things instead of preparing new systems and developing apps in a hurry. Banks had already been offering online consultations and digital solutions for a couple of years.

Actually, remote customer service was one of the main focuses of the technological transformation – not only because it was convenient for customers and saved their time, but also because it was beneficial for banks and companies themselves. They could save the resources needed for on-site client support, and with the introduction of chatbots and voice bots, the savings became enormous. No need to hire employees, rent offices, or pay for expensive training, yet the ability to talk to thousands of clients at the same time – something that wasn't achievable before.

Right now, other industries are inspired by fintech and implement similar solutions. Coronavirus boosted this process even more.

Summary

We admire fintech for being so open-minded and progressive. They can be a role model for many other sectors. And we can help them to digitize and transform their actions with robust software products.

Whether you represent a fintech company or any other industry, contact us. We develop modern web and mobile apps but also help with the implementation of innovative solutions like voice bots and artificial intelligence. Let’s see what we can do for you.

API tests for beginners

Software tester’s job differs depending on a multitude of factors. There are times when the number of scheduled tasks and tests is extremely high. In such a situation it’s worth searching for new solutions which allow increasing work’s productivity and assure its high quality at the same time. Using various types of improvements, e.g. uploading data with API definitely helps. It saves time allowing a one-time use of many input datasets. Doing it manually would significantly extend the time of testing.

1. WebApi, API, and REST API

However, what’s more essential about API’s functions is using it for tests. It allows checking the validation’s correctness or taking a peek at the backend values which aren’t visible from the frontend side. Before starting API tests it’s worth studying its basic terms and understanding what it is, especially considering WebApi. 

WebApi is a communication interface that uses the HTTP protocol and the XML or JSON format to send data. It allows access to resources and a quick information exchange with servers without the necessity to generate user interfaces, because it returns only data in the chosen format.

API, an Application Programming Interface, is a set of rules enabling sending data among applications. Many web services offer a public API for sending and receiving content from a certain service. An API available via the internet, using URL addresses starting with http://, is a web API. Downloading and publishing information on the internet requires so-called requests.

REST, Representational State Transfer, is a standard which establishes the rules of designing APIs. A REST service is based on the HTTP protocol and provides a set of well-received practices that work like road signs. It's a very flexible solution which can handle many kinds of connections and return several types of data – it cooperates with JSON, XML, or YAML. When making a connection, a REST API should rely on data provided together with the request, such as a user's ID, a token, or API keys.

2. Testing API

According to the ISTQB dictionary, API testing is testing the code responsible for communication between various processes, programs, and/or systems. It often includes negative testing to check the robustness of error handling.

It’s possible to use many tools while testing programmable interfaces. Some of them are mainly for performance tests, some for automatic ones, and others for functional testing. You’ll find a few of them below:

  • SoapUI – functional testing of REST and SOAP web services
  • Postman – REST web service testing
  • Tricentis Tosca – functional GUI testing and tests for API protocols
  • JMeter – performance tests and functional API testing, e.g. for REST and SOAP web services
  • Swagger – built-in functionalities allow documenting, standardizing, and testing programmable interfaces
  • Fiddler – a tool for debugging web traffic with broad testing applications, including API testing and performance tests
  • Insomnia
  • Rest-Assured
  • Katalon Studio

All these tools allow testing and documenting APIs.

3. HTTP communication

The HTTP protocol contains methods which allow manipulating a WebApi. HTTP communication is realized by sending a request to the server, which generates a response. An example can be seen in Illustration 1.

Listing 1. HTTP communication based on https://iteo.com/blog/

Illustration no 1. HTTP communication [own coverage]
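
The illustration itself is not reproduced here; based on the description below, the request could look roughly like this (the Host and Accept headers are illustrative):

  GET /blog/ HTTP/1.1
  Host: iteo.com
  Accept: text/html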

Let’s analyze the request and response from Illustration no 1. The above request contains:

  • The HTTP method (GET in this case) – it determines what kind of operation will be performed on a certain resource;
  • The URL address (in this case https://iteo.com/blog/) – the so-called endpoint, an address to which the request is sent;
  • The protocol version (in this case: HTTP/1.1);
  • Request headers – a place for additional information in the form of key–value pairs. The kinds of headers are determined in advance and the list can be found here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept;
  • Request body – a place for additional information used while editing/creating resources on the server. In this particular case, the body is empty.

Just like a request, a response contains headers and a body, too, but they are preceded by a status line with a response code.

Listing 2. Server’s response for a sent request. 
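
A sketch of such a response, based on the values quoted below (the Content-Type header and the body are illustrative):

  HTTP/1.1 200 OK
  Server: nginx/1.16.1
  Content-Type: text/html

  <!DOCTYPE html>…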

Server’s response for a sent request contains:

  • The status line: HTTP/1.1 200 OK
  • Further response headers, e.g. Server: nginx/1.16.1
  • An empty line
  • The response body

4. The most frequent HTTP methods

  • GET – downloads data from the server
  • POST – places new data on the server
  • PUT – updates/modifies existing data
  • DELETE – removes data from the server

5. Creating requests

Using the Wage platform (https://wageapp.io/) as an example and the Insomnia tool (https://insomnia.rest/), we'll discuss the most frequently used methods.

The GET method will help us download a list of available offer categories.

Illustration 2 shows a screenshot of the program mentioned above.

Illustration no 2. GET request with help of Insomnia tools

First, we have to choose one of the available methods (GET) and enter an adequate URL address – a path determining where to find certain resources on the server. In the discussed example, the endpoint is {{base_url}}/api/offers/categories, where {{base_url}} is an environment variable defined earlier in Manage Environments. Variables enable defining a value once and referring to it whenever it's needed; we will use this particular variable later on. There are two basic methods of passing parameters: one is passing them as part of the URL using the GET method, the other is passing them in the request body using POST, which will be described in the forthcoming part of the article.

After sending the request presented in Illustration 2, the whole list of available offer categories is downloaded. The code in Listing 3 shows a fragment of the HTTP response.

Listing 3. Fragment of HTTP response – GET method
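
The listing was originally an image; a fragment consistent with the description below might look like this (the iconUrl value is illustrative):

  {
    "categories": [
      {
        "id": 12,
        "name": "Babysitting",
        "description": "",
        "iconUrl": "https://wageapp.io/icons/babysitting.png"
      }
    ]
  }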

The response content from the server is in JSON format, and its structure consists of key–value pairs separated by a colon (key : value). In our example, the main element is categories. In JSON, every definition of an object/element – which always consists of more than one field – is enclosed in braces, while array elements are put in brackets. The first of our categories is a category with an ID equal to 12, the name "Babysitting", an empty description "", and an adequate iconUrl. Apart from the response content, Insomnia lets us read other interesting values. As shown in Illustration 2, the status code is "200 OK", which means success. We can also analyze other parameters such as time, size, headers, cookies, and the timeline – an axis helpful in the process of debugging.

The POST method allows sending data to the server – here, adding a new job offer on the tested website. It's necessary to fill in an adequate URL address, {{base_url}}/api/offers, and send a request. This method enables passing parameters in the request body. The discussed case includes, among others, parameters such as the type of offer, title, description, and price. The entered HTTP request is presented in Listing 4, and Listing 5 shows the server's response. Similarly to the previous case, the response status is "200 OK", so the request has been processed successfully.

Listing 4. HTTP request – POST method
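
The original listing was an image; a reconstruction of what such a request might contain (all field names and values are illustrative):

  POST {{base_url}}/api/offers HTTP/1.1
  Content-Type: application/json

  {
    "type": "service",
    "title": "Evening babysitting",
    "description": "Looking after two kids on Friday evening",
    "price": 50
  }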

Listing 5.  A fragment of HTTP response – POST method
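
The key part of the response described below could be as small as this (again, an illustrative reconstruction):

  {
    "id": 1997
  }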

In the response, we got a unique ID with the value 1997. Thanks to this value, we will be able to edit the offer in the next step.

The PUT method updates data sent to the server. The difference between POST and PUT is that when the PUT method is called 10 times with the same headers and body, the database will still contain only one resource, whereas calling the POST method 10 times creates 10 resources with various IDs. PUT can update the offer added previously in Listing 4, with the ID equal to 1997. The endpoint {{base_url}}/api/offers/{offerId} should be supplemented with the previously obtained offerId; after modification, the address is {{base_url}}/api/offers/1997. Now we can modify the request body accordingly. As an example, we changed two values: the offer's title and price. The request content after modification is shown in Listing 6.

Listing 6. HTTP request – PUT method
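
An illustrative reconstruction of the modified fields in the body (values assumed; the original listing was an image):

  {
    "title": "Evening babysitting with cooking",
    "price": 60
  }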

As a response, we got status 200 and the response body as in Listing 7.

Listing 7.  HTTP response – PUT method

The offer has been successfully updated. The last method to discuss is DELETE. It allows deleting previously added resources from the server; in our example, it deletes the logged-in user's avatar. To make it happen, we should enter the endpoint {{base_url}}/api/account/avatar and send a request. As a response, we got status 200 and the response body shown in Listing 8.

Listing 8. HTTP response – DELETE method

As you can see, in this case the avatar has been successfully deleted, too.

A few words about testing itself – what we managed to do

Just like learning any new thing, API testing can seem a bit difficult at first. The fact is that it's much easier than it may seem. Summing up, a typical path for every request can be presented using Gherkin syntax – a language for creating test cases with a characteristic structure and uncomplicated grammar. A typical scenario includes the following keywords: Feature, Scenario, Given, When, Then, And. In the discussed example, first we chose an endpoint – a location with input data (Given). Then, we performed various operations using the data (When). In the end, we checked whether all the actions were fulfilled according to the requirements (Then).
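
A sketch of such a scenario in Gherkin, based on the POST example above (the wording is illustrative):

  Feature: Offers API

    Scenario: Adding a new offer
      Given the endpoint {{base_url}}/api/offers
      When I send a POST request with the offer's title, description, and price
      Then the response status is 200 OK
      And the response body contains the new offer's id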

The end

Acquiring the skill of API testing can be highly beneficial, and sometimes even indispensable when working on bigger and more demanding projects. Performing such tests helps verify a service's operation in separation from the visual layer of an app and allows defining the scope of tests: we check not only the part visible to a user, but also the whole technical backbone of a website, as well as network and system services. Using API methods, we can also speed up some auxiliary tasks, such as creating test data, which makes the whole process quicker. While performing API smoke tests, we can easily make sure that everything works as planned and pass feedback to the development team. Nevertheless, the main asset is that during API tests we can peek at values that are not visible from the frontend side.

E-commerce success criteria for your consideration

1. Business plan – when should we expect the famous ROI?

When it comes to founding an online store, you have to ask yourself a fundamental question: how much should I invest and when will the store start to profit? But what does "profit" even mean? Will it be 20 orders after two months of being online? Or maybe sales worth 5 thousand zlotys? None of these matter if you haven't determined the measure of your endeavour's success at the very beginning.

A well-prepared business plan is a true must-have. Its key elements include:

  • The time it will take to launch the online store
  • The time needed to sign contracts with suppliers and an online sales agent
  • Whether the store can be stocked and sales can start straight away

You may be headed for disaster if you choose an irrelevant product, neglect identifying your competition, forgo any marketing campaign, design and build the store incorrectly, or define your target audience inaccurately.

You should consider all the costs connected with founding an online store: buying a domain, the store's software, elements of design such as the logotype, artwork, and banners, marketing expenses, and other possible extras. And remember to weigh them all against your competition – the larger it is, the more difficult it will be to stand out, and all the initial costs will automatically increase.

The process of purchasing goods and acquiring information must be intuitive – a user shouldn't have any problems ordering a product and getting information about its delivery time. If you know your customers well and provide stock that meets their expectations quality- and price-wise, there's nothing left to do but invite them in for shopping. That's where all the marketing fun starts.

If two weeks have passed and you haven't sold a single product, you should be concerned and analyze your store and its promotion. The same applies if, after 2–3 months (depending on the industry, the store's size, or the offer's rotation), the store still doesn't generate the planned profit.

2. Marketing

It’s a must to include a marketing budget in your business plan. Even the best store won’t achieve great results, if it’s not regularly visited. You can’t believe in nonsense such as “I own a store and the rest will come automatically”. It’s a simple way to ruin all your plans. These competitive times force you to encourage potential customers to visit your site.

Marketing activities have to be adequately targeted. If you own a fishing store, displaying ads for the newest floats to new moms is useless. It's worth checking which sites your potential customers visit and adjusting the ads accordingly – they shouldn't be identical for teenagers and pensioners. The places you use to advertise your store are equally essential: you can choose Facebook, Google Ads, or a spot on a thematic forum.

3. Analytics

Analysis can provide you with valid information about who your customer is, what sells best, where your users come from, or which group purchases the most or the least. How to obtain such knowledge? Many stores have their own analytic modules which clearly present plenty of information about the store's activity, including sales statistics and warehouse management. However, if you want to know your customers well, it's a good idea to use some external tools, too. A great example is Google Analytics. It allows you to analyze users and their behaviour in the store, which helps eliminate mistakes and organize marketing campaigns more effectively. It can also reduce the costs of running the store and increase its income.

Want to gain more knowledge about e-commerce? Check other articles from this section.

Apple Watch – a cool and useful gadget, but can it be more?

What are the benefits of watch app development? Let’s take a look at some features:

  • you can use the newest solutions provided by Swift and Apple
  • SwiftUI (supported on every Apple Watch running watchOS 6) – a brand new Apple UI framework for creating fast and reliable user interfaces
  • Watch Connectivity framework – a communication framework that offers developers an easy way to implement different forms of communication between the Apple Watch and iPhone apps
  • Location services (since the second generation, released in 2016, every Apple Watch has GPS and GLONASS)
  • Accelerometer and gyroscope
  • Bluetooth

This allows us to create an almost independent app. We can make a sport tracker that communicates internally with the iOS app and provides it with additional health data to supplement the Bluetooth sensors, giving more accurate data and more interesting calculations for examining the efficiency of workouts. The watch itself can communicate with Bluetooth sensors – and if we're speaking of water sports, we don't want to take our iPhone to the swimming pool very often. We can make an app that gathers all the data from the Apple Watch sensors while still providing additional health data, and then syncs this data with the iOS app for analysis and user feedback.

We can take advantage of everything the Apple Watch offers: it can monitor your sleep cycle, track your movement, and take specific actions or gather specific data. Based on our location, we can check if we would make it to the bus stop on time. There are many possibilities.

Let’s go technical

Development for the Apple Watch is almost identical to iOS application development – you write the code the same way as for the iPhone. Debugging and testing can take some time when dealing with cross-device applications, since (for now) we can't simultaneously connect both the iOS and the Watch app to one debugger (one of the apps has to be manually attached from a separate process). Other than that, it's just normal development. There are guidelines for the most efficient UI, tips and specifications on some Watch-specific mechanics, and clearly stated limitations.

When developing an iOS app, we can determine as early as the planning phase whether a given feature will be shared with the Watch app.

We can do this on many levels: by developing a separate app or by extracting code used by both the iOS and the Watch app. That's where planning shared features becomes so important. When dealing with Bluetooth, we can extract all the logic into a separate framework and include it early in the main iOS application. It will also make the code easier to maintain, and when the time comes, it will be much cheaper to develop a Watch version of the app. Almost any data can be shared between the Apple Watch and the iOS application via a secure container. By default, communication between the Apple Watch and the iPhone is encrypted, so we don't have to worry about our data falling into the wrong hands – Apple takes security and privacy very seriously. Besides internal communication and Bluetooth, the Apple Watch offers the same networking as normal iOS applications: you can make the same HTTPS requests and get the same data to keep your app updated.

Diagram: the iOS and Apple Watch app architecture

Limitations

HTTPS requests can only be sent when we have an internet connection. This can be achieved by connecting the Watch to a known WiFi network, being in close proximity to a paired iPhone with mobile data turned on, or using a cellular version of the Watch.

Some actions, like refreshing complications (custom Watch Face widgets), have a certain limit for daily refresh rates. This has been implemented to extend the battery life by preventing developers from running complex algorithms in the background.

App authorization is a bit tricky – the Apple Watch does not provide a normal "qwerty" keyboard, so any actions that require text input from the user have to be done by dictation or by scribbling a message.

Taking all the features and limitations into consideration, the Apple Watch is a great device for gathering all kinds of health and movement data. Furthermore, it can do almost everything an iOS app can – we just need to make it a little bit "smarter". The only thing left is to make good use of the data and the in-built abilities of the Apple Watch.

How GraphQL let us overtake the REST

Would you like to take your development process to the next level? Is GraphQL another buzzword, or will it replace REST like REST replaced SOAP?

What GraphQL is

The most important thing to mention is that GraphQL is not another framework! It's a query language and execution engine created in 2012 by Facebook. The GraphQL specification (in short: a description of behaviors, data formats, etc.) became an open standard in 2015.

One can even boldly say that it changes the way we think about creating applications.

Making it simple, a typical GraphQL API consists of the following components (sketched just below the list):

  • Schema – the core; it describes the types and capabilities of the API
  • Query – the root type for queries (data fetching)
  • Mutation – the root type for mutations (modifying data)
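
A minimal sketch of such a schema in GraphQL SDL, using the blog example discussed later in this article (all type and field names are illustrative):

  type Post {
    title: String!
    content: String!
    author: Author!
  }

  type Author {
    name: String!
  }

  type Query {
    posts: [Post!]!
    author(id: ID!): Author
  }

  type Mutation {
    addPost(title: String!, content: String!): Post!
  }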

We are constantly using GraphQL on both client and server sides.

For .NET projects, we use the HotChocolate framework (https://hotchocolate.io/), which cooperates greatly with all the .NET features.

For node and frontend projects, we get along well with Apollo (https://apollographql.com/).

Top reasons to use it in your next project

Strongly typed schema

For me, a long-time C# veteran, the coolest thing is the type system. It's a lot less error-prone compared to the classic REST approach, and fewer bugs mean less time spent fixing them.

The API straightforwardly defines the allowed operations and data models.

The type system also allows us to generate some parts of the code, letting the development team focus on your specific problems (not the infrastructure ones) and further increasing development speed.

What you query for is what you get

Let’s assume that we are creating another blogging application. Imagine that we have a page which displays multiple blog entries and the respective author’s name.

In a classic REST API we would have a set of endpoints, e.g.:

GET /posts

GET /author?id={id}

The frontend will request the list of posts and then each post's author respectively.

In this case, so-called underfetching occurs. It means that one endpoint cannot supply the required amount of data, which leads to performing additional, unnecessary requests (in our case – querying for the author's name).

At the same time, however, so-called overfetching occurs, because we have transferred all the author's data, including fields that are currently redundant (the author may have a photo, a description, etc.).


What could a similar GraphQL query look like?

  query GetPostsWithAuthorNames {
    posts {
      title
      content
      author {
        name
      }
    }
  }

We are explicitly declaring what we want to receive from API.


Better overall performance

Performing one round trip to the server is, in most scenarios, much more performant than doing multiple requests. Not to mention that less data is transferred over the network.

In a REST API, you could have endpoints like user/basicdata and user/details to save some transfer, but due to the nature of REST, this leads to code bloat.

Hosting GraphQL API in the cloud might be cheaper than its REST equivalent!

Versioning

In REST APIs, versioning is based on duplicating code and endpoints (v1/users, v2/users, etc.).

A GraphQL API is backward compatible. You can freely expand it as you wish, because API consumers query exactly what they would like to get.

It requires a lot less effort to evolve your GraphQL API which basically means a faster development process.

And after some time, you can freely drop the deprecated fields.
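
In the schema, this is as simple as GraphQL's built-in @deprecated directive – a sketch on the Author type from the earlier example (the fullName field is illustrative):

  type Author {
    name: String!
    fullName: String! @deprecated(reason: "Use name instead.")
  }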

Documentation

Thanks to the strongly typed schema, we are able to generate documentation with "one click".

The existence of tools like GraphiQL or GraphQL Playground gives us the possibility to write queries with autocomplete, and to view the schema and the docs.

On the other hand, GraphQL Voyager generates a beautiful visualization/map of the schema.

Simpler and more beautiful code

Code, on both the server and the client side, is much briefer and simpler, which in consequence means it's easier to maintain and modify. Simple code is also a morale booster for the development team.

…And there are no endpoints so frontend and backend devs aren’t fighting for API structure (that much). 

Flexibility

Still remember the blog posts with the author's name example? What if we additionally wanted to display the author's description and a few of their top-rated posts?

In GraphQL, we might be able to do this without touching the server side at all, while in the classic REST approach it would probably end with expanding one endpoint (which might be inappropriate) or adding another one – see the sketch below.
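
Assuming the schema already exposes these fields, the earlier query simply grows on the client side (the description and topRatedPosts fields are illustrative):

  query GetPostsWithAuthorDetails {
    posts {
      title
      content
      author {
        name
        description
        topRatedPosts(limit: 3) {
          title
        }
      }
    }
  }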

Making changes is just easier and faster.

Let’s assume we would like to rebuild the whole app layout. Due to the flexibility of GraphQL and it’s queries – probably we might be able to avoid complex works on the server-side.

This basically means that frontend work can start right away!

Rapid MVP development

Taking into account the flexible nature of GraphQL, it fits well with rapid MVP development, where many requirements may change during the work.

Microservices…

This is where GraphQL shines brightly! Thanks to so-called schema stitching (or Apollo Federation for the node implementation), we are able to simplify communication with server-side applications by joining many services into one.

This basically means that the frontend apps are querying data from one API.

Schema stitching also allows us to stitch in a legacy API and hide it behind the GraphQL schema!

If you’re curious about other insights from our team of developers, check the Development section in our blog.