Software Development: A Complete Guide

Almost every business exists to solve problems on behalf of customers. In many instances these days, the solutions take the form of software – be it a mobile app to facilitate parking or a web portal to advertise job vacancies. If you’re looking to develop a piece of software to help your customers, then it’d be a good idea to read this complete guide to software development first.

Software: a definition

Let’s kick things off with a definition: what exactly is software? 

In Principles of Information Systems by Ralph M. Stair, software is defined as ‘computer programs that govern the operation of the computer’1.

That sounds fairly simple, doesn’t it? But, on closer inspection, the definition is somewhat more complex. 

Consider that software and computer hardware are inextricably linked. In fact, the very earliest electronic computers – such as the ENIAC (Electronic Numerical Integrator and Computer) – required software that was machine-specific. 

It wasn’t until the development of high-level programming languages in 1958 that software could be written to run across different computer architectures. 

From these earliest days, software has become ever more complex. However, at the risk of oversimplifying things, software today can be roughly divided into two categories: 

  • Operating systems
  • Application software

Operating systems

You don’t have to be a computing professional to be familiar with the term operating system (typically shortened to the initialism ‘OS’). 

This refers to the piece of software that provides instructions to the computer’s hardware, scheduling tasks for the efficient use of the system as well as providing common services to applications. 

It’s likely you encounter multiple operating systems each day, from Android on your smartphone to Microsoft Windows on your office workstation. 

The point here is that a) operating systems are a type of software, and b) operating systems themselves interact with and support individual pieces of application software on a computer. 

Application software

What, then, is application software? Perhaps the easiest way of defining application software is as software that performs specific tasks on behalf of computer users. What application software is not is any software that operates or administers the computer hardware (for that is the job of the operating system). 

The easiest way to understand application software is to consider some examples. Word processors (such as Microsoft Word), or graphic art software (e.g. Adobe Photoshop) are examples of application software in common usage. 

Note – both operating systems and application software can be proprietary or open source. The former term means that the OS or application software is owned by its creator or publisher, who holds a legal monopoly on selling it. The latter term (open source) means that the owner of the software grants users the rights to use, study, modify and redistribute it. 

How operating systems, applications and hardware interact in computers

SaaS: the evolution of software

Yes, those are the two predominant forms of software. However, since the early 2000s, the situation has changed with the emergence of cloud computing. 

Beginning with Amazon Web Services (AWS) in 2002, and followed by the Google Cloud Platform in 2008 and Microsoft Azure in 2010 (as well as a host of other cloud computing providers), a new paradigm emerged in which it became possible to provide software as a service (SaaS). 

But, what exactly is ‘the cloud’? And, how does it relate to software as a service? 

The International Organisation for Standardisation (ISO) defines cloud computing as:

‘Cloud computing is a paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources with self-service provisioning and administration on-demand’2

In less technical terms, cloud computing effectively allows companies or individuals to launch and run applications without the need for expensive capital investments in servers and other hardware. Instead, you (typically) only pay for the cloud services you use – which can also be scaled up and down as per your resource requirements. 

The development of cloud computing into an affordable and widely accessible solution has paved the way for companies to provide software as a service (SaaS).

What is software as a service (SaaS)? 

For those of you old enough to remember, buying and using a piece of application software traditionally involved going to a computer shop and purchasing a physical CD-ROM. 

With a one-off payment you now owned a particular piece of software and (providing your personal computer had the appropriate specifications) you were free to use it to your heart’s content. 

The thing is, this method of software sales and distribution had a number of drawbacks. For one thing, consumers could often expect to pay a large upfront cost for the software (recall the days in which the Adobe suite would cost north of £500!). 

Furthermore, once installed, said software would be tied to a single computer. Want to log in to your software from another device? Tough luck. 

Finally, users would often be responsible for managing and installing any software-related security updates – something which was not only time-consuming, but which represented a security vulnerability if not completed in a timely fashion.

Thus, with the arrival of cloud computing, a new alternative became viable: software could be provided not in a physical form but over the Internet, using remote infrastructure and hardware to facilitate it. 

The first popular examples of SaaS began to appear in the early 2000s. Take Gmail, for example. Prior to this, if you’d wanted a piece of software to act as an email client on your computer, you’d have had to purchase a physical copy of Microsoft Outlook and install it. 

With the emergence of Gmail, you simply needed a web browser to access your emails. 

As history shows, this proved to be wildly successful (at the time of writing Gmail is the most popular email service in the world with an estimated 1.2 billion users). 

Today, SaaS has become the primary way in which companies deliver their applications to consumers3.

The National Institute of Standards and Technology (NIST) defines SaaS as follows4:

‘The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g. web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings’.

In contrast to traditional forms of software distribution, SaaS has a number of advantages for both consumers and the developers of the software alike. These include: 

  • No upfront licence fees – instead, users pay a recurring fee. This not only makes the software more affordable for users but produces a more predictable cash flow for the developer/owner of the software. 
  • Accessibility – SaaS solutions can typically be accessed by any device with an Internet connection. As a result, users aren’t restricted to a single device. 
  • Security and updates – being Internet-connected, SaaS solutions will typically update automatically and implement security upgrades. This both saves users time and energy, and minimises opportunities for security breaches. 
  • API integrations – many of today’s SaaS solutions incorporate built-in APIs, better allowing them to interact with other applications. 

So, as you can see, although there is a consensus that software can be divided into two ‘camps’, SaaS represents something of a divergence from application software. It’s software, but not as those of us who grew up in the 1990s knew it…

Read also: How to build a SaaS product: Step-by-step guide

A brief history of software

As we mentioned earlier, software stretches back to the earliest days of electronic computing, with the creation of software becoming significantly easier after the development of high-level programming languages in the late 1950s. 

But, where does the term ‘software’ actually come from? 

As with the etymology of many words, the true origin of the word ‘software’ isn’t entirely clear; however, one historian5 credits its coinage in 1958 to John Wilder Tukey, an American mathematician and statistician (as an aside, Tukey is also credited with popularising the term ‘bit’ in computing). 

However, the theory and concept of software had been coalescing long before it was formally named. 

It is widely agreed amongst historians that the first modern theory of software was proposed by Alan Turing. In 1936, Turing published his paper On Computable Numbers, with an Application to the Entscheidungsproblem. This paper proved to be so foundational to the field of computing that it is credited as the spark that eventually led to the creation of two academic fields: computer science and software engineering. 

As the names suggest, computer science deals with the more theoretical concepts, whilst software engineering focuses on practical applications. 

But, what broader impact did Turing’s contributions have on software development?

Turing analysed what it meant for a human to follow a definitive method or procedure in order to perform a task. From this analysis, the English mathematician invented the idea of a ‘Universal Machine’ that could decode and perform any set of instructions. 

In short – Turing was arguably the first person to intellectually conceptualise a computer (in the modern sense). On 19th February 1946, he presented a paper detailing the design of the first stored-program computer. The paper was presented whilst Turing worked on the design of the ACE (Automatic Computing Engine) at the National Physical Laboratory (NPL) in London.

It was this 1946 paper that eventually led to the creation of the Pilot ACE – which executed its first program on 10th May 1950.

How is software actually developed? Software development and maintenance

Okay, by this point, you should hopefully have a clear grasp as to what software is. The next step is to understand how a piece of software (or OS) goes from an idea to a tangible, usable product. 

A note on software engineering

The first point to consider is that software development sits beneath the broader umbrella of software engineering (which also incorporates organisational management, project management, configuration management and other elements). 

The ISO/IEC/IEEE systems and software engineering vocabulary defines software engineering as6:

‘The systematic application of scientific and technological knowledge, methods, and experience to the design, implementation, testing, and documentation of software’. 

As we saw earlier, the field of software engineering emerged in the early-to-mid twentieth century. The first formal software engineering conference is widely considered to be a 1968 conference organised by NATO (the North Atlantic Treaty Organisation)7. It was at this conference that the first guidelines and best practices for software development were established. 

Fast-forward to today, and software engineering is an established discipline, with the generally accepted best practices in the field collected in the Software Engineering Body of Knowledge (SWEBOK)8.

The core pillars of software development

Software development, then, sits beneath the broader umbrella of software engineering. But, in turn, a number of core pillars sit below the term software development. 

As you can imagine, the development of a piece of software is a many-sided task, and thus involves a plethora of tools, individual skill sets and methodologies. We’ll outline these below. 

The people involved in a software development project

When you choose a company like GoodCore to develop a piece of software for you, you’ll find that a veritable army of people will spring into action to bring your software to life. 

These people will include: 

  • Programmers – also known as developers, these are the people who write the code that forms the basis of your software. However, given the complexity of many modern software projects, there are different types of programmers: 
    • Full-stack developers – individuals who handle both the front-end and back-end development tasks. 
    • Front-end developers – individuals who focus only on those parts of the software with which users will interact (e.g. the user interface and user experience). 
    • Back-end developers – programmers who manage the server-side and database operations of a piece of software. 
    • Mobile app developers – developers who specialise in developing software for mobile devices. 
    • Data scientists – individuals who use algorithms and machine learning to interpret complex data sets. 
  • Business analyst – it’s the job of a business analyst to fully understand the client’s needs and then translate them into requirements (that will be built into the resulting software). 
  • Project manager – the project manager will ensure that the software is delivered on time and within the client’s budget. The project manager plays an integral role in managing the programmers. 
  • Software architect – particularly complex software projects will often involve the services of a software architect. It’s their job to select the appropriate tools and platforms, ensure integrations work, and generally ensure the end software is stable and secure. 
  • UI/UX designers – UI/UX designers are responsible for creating user-friendly, attractive designs that make the software easy and intuitive to use. 
  • Quality assurance (QA) engineer – the QA engineer is there to verify that the software functions correctly and meets the requirements as set out by the business analyst and client.
  • DevOps engineer – acting as the conduit between the development and operations teams, striking a balance between the introduction of new features/changes and maintaining the stability of the software. 

For more information on software development team structure, check out our guide on building a software development team: key roles and responsibilities.

Just as the old saying ‘it takes a village to raise a child’ often rings true, the same can be said of software, with myriad people needed to create and maintain a quality piece of software. 

There are also several supporting disciplines that can fall under a variety of job titles (or be carried out by the individuals listed above). These supporting disciplines include: configuration management, deployment management, documentation, and software quality assurance. 

Now, whilst these are the main roles involved in developing software, you’ll find that there are many other roles which sit on the periphery. These can include:

  • Marketing – promoting and advertising the software to grow its user base. 
  • Sales – engaging in activity to sell the software to a growing base of customers. 
  • Account management – account managers will typically be engaged in activity that helps to retain existing customers. Account managers will also usually have to upsell/cross-sell new products/features to existing customers.
  • Technical support – supporting customers who have purchased the software to get the most value out of it. 

In short, you can see that there are multiple people with niche skill sets who contribute to the development of software. 

However, hiring and onboarding all these people can incur significant expenditure and overheads, which is why many companies are now choosing to outsource their development to dedicated third-party teams.

What tools are used in software development? 

Depending on the software being developed, the software team will usually need to make use of a variety of specialist tools. These include: 

  • Compiler – a program used to translate human-readable code into machine-readable instructions. Compilers also help to catch syntax and semantic errors before code is run. 
  • Integrated Development Environment (IDE) – these are programs in which all (or nearly all) development can be done. IDEs typically include features for authoring, modifying, compiling, deploying and debugging software. 
  • Project management tool – project management tools such as Jira are used to help development teams plan, track and manage their development project. 
  • Code repositories – code repositories, such as GitHub, allow developers to search for examples of code as well as manage their own code and effectively track changes etc. 

These are just a few examples of the many tools that developers may call upon during the course of a dev project, and new tools are constantly being launched to market. For example, with the rise of cloud computing, a wealth of cloud-based tools have arrived. These include AWS Cloud9, which allows developers to use a cloud-based development environment (as opposed to having to set up a virtual environment on their own local system). 

Of course, some of these tools can be costly and/or complex to set up and maintain. It’s for those reasons (and many more) that a growing number of entrepreneurs, companies and organisations are choosing to outsource their software projects to dedicated development companies that can build the software on their behalf. 

Software development methodologies

We’ve now seen the constituent parts that go into a software development project (both human and inanimate). 

But, how do these parts interact and work together to create a finished product? The answer is varied and pivots upon the methodology employed. 

Methodological ‘morphology’

Before we look at the individual methodologies which can be deployed in what is known as the software development lifecycle, we’ll first look at the overarching ‘morphology’ that individual methodologies can fall within. 

Defined vs customised

Software development methodologies can follow either a defined or customised morphology. 

In defined morphologies the methodology will follow a formal, documented standard with little to no deviation from this standard throughout the lifetime of the project. 

In customised morphologies, the methodology will typically be tailored to the project in question. In some instances, the methodology will demonstrate ‘emergence’, whereby it evolves as the overall shape of the project becomes clearer over time. 

Sequential vs iterative

Software development methodologies can also be either sequential or iterative in nature. 

Sequential software development methodologies involve a series of steps. Each step (be it design, implementation or testing) must be completed before the project moves onto the next. 

On the other hand, iterative software development methodologies involve multiple aspects of the software being developed simultaneously, with small parts of the overall project designed, implemented and tested in turn. In many instances, this approach isn’t linear (meaning development work can occur on a ‘back and forth’ basis). 

Let’s get meta: meta models and view models

In addition to thinking about the morphology of a methodology, it’s also important to consider both meta models and view models. 

Let’s begin with meta models.

What are meta models? 

Meta models are – and bear with us here – models of models. 

Think of it like this; you want to develop a piece of software. But, you aren’t exactly sure what specification it will take – or its exact IT requirements. 

A meta model, then, is an abstraction or representation of a piece of software to be developed. 

Meta modelling normally takes place in what’s known as a ‘modelling space’ and will use domain-specific programming languages (DSLs). Working together, the client and the developers will define the functions and requirements of the new software using the DSL. 

The modelling space itself will often consist of a number of layers built on each other. It will look something like this: 

  • M0 – the software itself, as it will actually run. In a meta scenario this is called ‘the reality’. 
  • M1 – the level of the model, which describes the software to be developed in the abstract. 
  • M2 – the level of the meta model, which defines the language used to create the models at M1. This level uses a higher degree of abstraction in order to make statements about the models from a different point of view. 
  • M3 – the final layer, which is known as the Meta Object Facility (MOF). This is a standard type system for model-driven software engineering, and it defines the language of the meta models themselves. 

In summary – meta models allow pieces of software to be designed independently of any supporting platform, giving developers the freedom to consider the ideal solution without concern for underlying dependencies. 
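To make the layering slightly less abstract, here is a minimal, hypothetical Python sketch (all names are invented for illustration). The class definitions play the role of an M2 meta model – the ‘language’ for describing software – whilst the instances built from them form an M1 model; the eventual running application would sit at M0.

```python
from dataclasses import dataclass, field

# M2: a toy 'meta model' – the language in which software models are written.
@dataclass
class Entity:
    """A domain object that the software must manage."""
    name: str
    attributes: list[str] = field(default_factory=list)

@dataclass
class SoftwareModel:
    """An abstract description (M1) of the software to be built (M0)."""
    name: str
    entities: list[Entity] = field(default_factory=list)

    def describe(self) -> str:
        parts = [f"{e.name}({', '.join(e.attributes)})" for e in self.entities]
        return f"{self.name}: " + "; ".join(parts)

# M1: a model of a hypothetical job-vacancy portal, written in the M2 'language'.
portal = SoftwareModel(
    name="JobPortal",
    entities=[
        Entity("Vacancy", ["title", "salary", "location"]),
        Entity("Applicant", ["name", "cv"]),
    ],
)

print(portal.describe())
# JobPortal: Vacancy(title, salary, location); Applicant(name, cv)
```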

What are view models? 

A view model is a construct that sits between the user interface and the underlying data and business logic. 

What does that mean in more simple terms? 

A view model effectively acts as an intermediary between the model (the plans for the software) and the view (the software itself). In other words, a view model takes potentially very complex plans for software and translates them into a medium that is easier for developers to understand. 

As you can imagine, view models are regarded as essential for especially large and complex software development projects. 
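As a rough illustration – a minimal sketch rather than any particular framework’s API – a view model can be as simple as a class that wraps a raw model object and exposes only display-ready values for the view to bind to (the invoice domain here is invented for the example):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Invoice:
    """The model: raw data and business logic."""
    number: int
    amount_pence: int
    due: date

class InvoiceViewModel:
    """The view model: translates the model into display-ready values."""
    def __init__(self, invoice: Invoice):
        self._invoice = invoice

    @property
    def title(self) -> str:
        return f"Invoice #{self._invoice.number}"

    @property
    def amount_display(self) -> str:
        return f"£{self._invoice.amount_pence / 100:.2f}"

    @property
    def is_overdue(self) -> bool:
        return date.today() > self._invoice.due

# The view (UI code) binds to the view model, never to the raw model.
vm = InvoiceViewModel(Invoice(number=1042, amount_pence=250000, due=date(2024, 1, 31)))
print(vm.title, vm.amount_display, vm.is_overdue)
```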

A brief history of software development methodologies

By the late-1960s and early-1970s computing had become an established field, with software finding applications in a multiplicity of commercial and governmental fields. 

During this period, however, the development of software occurred in a relatively haphazard fashion – without defined methodologies to guide the process. 

This resulted in what has now become popularly known as the ‘software development crisis’ or ‘software crisis’. 

The term first emerged at the 1968 NATO Software Engineering Conference and was further shaped by Edsger Dijkstra in his 1972 Turing Award lecture. Dijkstra described the problem as follows: 

“The major cause of the software crisis is that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem”9.

To put this more simply, computer programmers were taking longer and longer to develop new pieces of software. In fact, project timelines of three years or more were not uncommon. 

If the development of software were not to take ever longer (or even grind to a halt), a solution was needed. That solution took the form of defined software development methodologies that in a sense provided ‘guardrails’ to ensure development projects stayed on track. 

A timeline of software development methodologies

Like other aspects of software development, these methodologies developed and evolved over time. Below, you’ll find a timeline which sets out when each major methodology emerged:

  • 1968: Structured Programming Paradigm.
  • 1974: Cap Gemini System Development Methodology (SDM).
  • 1980: Structured Systems Analysis and Design Method (SSADM).
  • 1985: Soft Systems Methodology (SSM).
  • 1990: Rapid Application Development (RAD). 
  • 1992: Object-Oriented Programming (OOP).
  • 1994: Dynamic Systems Development Method (DSDM).
  • 1995: SCRUM Methodology. 
  • 1998: Rational Unified Process (RUP).
  • 1999: Extreme Programming (XP).
  • 1999: Unified Software Development Process (USDP).
  • 2005: Agile Unified Process (AUP).
  • 2008: Disciplined Agile Delivery (DAD).
  • 2011: Scaled Agile Framework (SAFe). 
  • 2013: Large-Scale Scrum (LeSS).

An overview of the software development lifecycle (SDLC) process

Although, as you will soon read, there are many different types of development methodology, these sit under the broader umbrella of the ‘software development lifecycle process’. 

This process involves the following lifecycle phases: 

Requirements gathering and analysis

The first phase of the software development life cycle (SDLC) involves the software development company developing an acute understanding of the client’s requirements and objectives. 

This phase typically involves discussions, focus groups, interviews, and surveys of stakeholders in order to establish the desired functionalities and features of the software and thus arrive at a scope of work. 

Planning and design

The second phase in the SDLC is the creation of a comprehensive project plan. The plan will outline the project roadmap, incorporating a timeline, resources, and – most importantly – deliverables. 

Development

We’re now at the third phase of the software development life cycle. It’s at this stage that development begins, with programmers writing the underlying code for the software. It’s at this point that various methodologies such as Agile come into their own, as they allow for iterative development with regular communication between the developers and the client. 

Testing and quality assurance

With the code complete, the next phase in the SDLC is testing and quality assurance. This can involve unit testing, integration testing, system testing and user acceptance testing (UAT).

Deployment and implementation

The final phase in the software development life cycle involves deployment and implementation of the software. This will see the development team working with the client to establish the software environment and, if necessary, migrate data. 

Maintenance and support

Depending on who you ask, the software development life cycle (SDLC) also includes an ongoing maintenance and support phase – where the development agency provides regular updates, bug fixes, security patches and so on. 

What are the different types of software development methodology? 

What then are the most common types of software development methodology? We have detailed the most commonly used methodologies (also known as software development life cycles or software development processes) below. 

Code and fix

The simplest type of software development process is ‘code and fix’, in which a single developer considers the purpose of a program, writes the code to realise said program, and then releases (implementing fixes along the way). 

As you’ll appreciate, this is a remarkably simple process – which is precisely why many critics don’t consider it to be a software development process at all. It’s certainly not suitable for software development projects of any meaningful size or complexity. 

Agile software development

Even if you’ve only a passing interest in software development, you’re likely to have heard of the Agile methodology. It has become incredibly popular amongst development firms ever since the publication of its founding document, the Manifesto for Agile Software Development10, in 2001.

It’s important to note that the term Agile doesn’t refer to a single methodology, but rather a framework of approaches to development. As a common theme, however, these various frameworks share an embrace of iteration and continuous feedback. Advocates argue that the Agile methodology is a fundamentally more ‘human-friendly’ approach to software development. 

Some of the most popular Agile frameworks include: 

  • Adaptive software development (ASD).
  • Agile modelling.
  • Agile unified process.
  • Disciplined agile delivery.
  • Dynamic systems development method (DSDM).
  • Extreme programming.
  • Feature-driven development.
  • Lean software development.
  • Lean startup.
  • Kanban.
  • Rapid application development (RAD).
  • Scrum.
  • Scrumban.


Within Agile development, there are a series of concrete practices that are employed throughout the project life cycle. These include (but are not limited to): acceptance test-driven development (ATDD), Agile modelling, Agile testing, backlogs (Product and Sprint), continuous integration (CI), domain-driven design (DDD), iterative and incremental development (IID), story-driven modelling, test-driven development (TDD), velocity tracking and more. 

These practices cover areas like requirements, design, modelling, coding, testing, process, and quality. 

Regardless of which Agile framework is used, iteration is a common theme. With iteration at the heart of Agile development, pieces of software can be released in iterations – providing a faster route to market, access to user feedback on an ongoing basis and, of course, a quicker path to revenue generation. 
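To give a flavour of one of these practices, below is a minimal test-driven development (TDD) sketch in Python. The parking-fee rules are invented purely for illustration; in real TDD the tests would be written first, watched to fail, and only then satisfied with the simplest possible code.

```python
import unittest

# Step 2 ('green'): the simplest code that satisfies the tests below.
def parking_fee(hours: int) -> int:
    """Return the fee in pence: a flat 200p for the first hour, then 150p/hour."""
    if hours <= 0:
        raise ValueError("hours must be positive")
    return 200 + (hours - 1) * 150

# Step 1 ('red'): these tests are written first and initially fail.
class TestParkingFee(unittest.TestCase):
    def test_first_hour_flat_rate(self):
        self.assertEqual(parking_fee(1), 200)

    def test_additional_hours_charged_at_lower_rate(self):
        self.assertEqual(parking_fee(3), 500)

    def test_rejects_non_positive_hours(self):
        with self.assertRaises(ValueError):
            parking_fee(0)

if __name__ == "__main__":
    unittest.main()  # Step 3: refactor freely while the tests stay green.
```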

Waterfall software development

Waterfall model

Another software development methodology that has fairly widespread adoption is ‘waterfall’. 

In contrast to Agile development, waterfall development has a sequential morphology in which the development process follows a stepped process (akin to a waterfall – hence its name). 

These ‘stepped’ phases will typically look something like this: 

  • Requirements analysis (to create a software requirements specification). 
  • Software design and development. 
  • Implementation. 
  • Testing. 
  • Integration (if required, e.g. where there are multiple subsystems to be integrated). 
  • Deployment. 
  • Maintenance. 

It is widely accepted that the coining of the term (and the methodology it describes) was the work of Winston W. Royce in 197011.

Although following a sequential morphology, waterfall development does allow for a degree of overlap and splashback between phases – accepting that software development can involve unexpected change requests. 

Waterfall development also places a strong emphasis on documentation, with written documents, formal reviews, and approval/sign-off all maintained throughout the project. 

In short, the linear shape of waterfall development makes it easy to understand and manage. However, this simplicity can come at the expense of flexibility, and changes late in a project can prove costly. 

Rapid application development (RAD)

Another iteration-focused development methodology, rapid application development is an approach that leans heavily on prototyping. 

The RAD process consists of four steps: requirements, user design, construction, and cutover. 

RAD model

In the first step – requirements – preliminary data models and business process models are developed (using structured techniques) to arrive at a set of requirements. 

The next two steps – user design and construction – are worked on in tandem, repeatedly creating a series of prototypes until it is confirmed that a prototype has been created that meets all the requirements. 

The final step – cutover – sees the finished product deployed. 

RAD has a sizable fan base amongst development companies due to its ability to progress time-sensitive projects. However, it is not without its critics – including those who argue that it is only suitable for small-to-medium sized projects. Furthermore, for RAD to be effective, it requires stable teams with deep subject knowledge – not all teams can meet these requirements. 

Spiral development

Spiral model

One day in 1988 a new software development methodology entered the world – spiral. Introduced by Barry Boehm – an American software engineer – the spiral methodology combines elements of the waterfall approach and rapid prototyping methodologies. 

Spiral is defined by four fundamental principles:

  • A focus on risk minimisation, breaking a project down into smaller steps. 
  • Each cycle of the spiral involves progressing through the same sequence of steps for each part of the product. 
  • Each cycle around the spiral passes through four quadrants; determine objectives, evaluate alternatives, develop deliverables, and plan the next iteration. 
  • Begin each cycle by identifying all the stakeholders and their respective ‘win conditions’. 

The spiral development methodology has won plaudits for its suitability for larger, more complex software projects. This is in large part due to its emphasis on risk analysis and minimisation. The repeated steps also act as an integrated quality assurance process, resulting in higher quality outcomes, more quickly. 

Like other methodologies, spiral does have its drawbacks – such as not being suitable for smaller projects – but, by and large, it is a methodology which has seen widespread adoption. 

Shape Up

Perhaps the newest methodology in this article, Shape Up is an approach that was created by Basecamp in 2018.

The Shape Up methodology consists of three phases: 

  1. Shaping – where the project is ‘shaped’ by an individual who will be working on the project (each team member has the opportunity to undertake their own ‘shaping’). The shaping process is only loosely defined, but should include defining a narrow problem, setting a fixed time in which to solve it, and pre-empting risks.
  2. Pitching – once a project has been shaped, individuals pitch their ‘shapes’ to the team as a whole. The leadership selects the winning pitch. Once accepted, the team is expected to coalesce and focus on the selected pitch.
  3. Building – projects are assigned as a whole, not broken down into tasks. The team then works collectively to solve sections of the project (with each section being called a ‘scope’). The project is completed once each scope has been finished. 

Shape up model

As you can see, the Shape Up methodology diverges quite considerably from other methodologies. There are no backlogs, sprints, tasks, or velocity tracking – putting clear water between Shape Up and competing methodologies like Agile and waterfall. 

Despite being unorthodox in nature, Shape Up has been taken up by a variety of software companies. 

Other software development methodologies

Whilst we’ve detailed the most commonly used software development methodologies, there are others that can be considered ‘advanced’ in the sense that they can require greater degrees of management skill. Examples include: 

  • Behaviour-driven development. 
  • Chaos model. 
  • Lightweight methodology. 
  • V-Model.
  • Unified Process. 

Software development practices

Aside from the methodologies outlined above, there are a number of other software development practices that are typically incorporated into a software development project. The first of these is DevOps. 

What is DevOps? 

DevOps principles

The term DevOps refers to a set of practices, tools and knowledge that automate and integrate the processes between software development and IT teams. 

It’s the aim of DevOps to shorten the software development life cycle whilst maintaining a high level of software quality. 

Learn more about how DevOps can enhance your operations and help your company lead the market: DevOps for SaaS Projects

Continuous integration (CI) / continuous delivery (CD)

CI/CD pipeline

Another two pieces of best practice that are typically incorporated into software development projects are continuous integration (CI) and continuous delivery (CD). 

The first of these – CI – is a practice whereby developers frequently merge their code changes into a central repository, with each merge automatically built and integrated into the software project – reducing version control issues. Continuous integration has its roots in the early 1990s, when American software engineer Grady Booch first proposed the use of the term.

CD is a process where code changes are automatically deployed into a testing/production environment. Continuous delivery will usually follow a continuous delivery pipeline, where automated builds, tests, and deployments are aligned in one release workflow. 
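Real pipelines are normally defined in a CI service’s own configuration format (GitHub Actions, GitLab CI, Jenkins and so on), but purely as an illustration of the fail-fast idea behind a delivery pipeline, here is a hypothetical Python sketch – the stage names, paths and deploy.sh script are all invented for the example:

```python
import subprocess
import sys

# A hypothetical three-stage pipeline; stage names, paths and deploy.sh are
# invented for this example and would differ in any real project.
PIPELINE = [
    ("build", ["python", "-m", "compileall", "src"]),
    ("test", ["python", "-m", "pytest", "tests"]),
    ("deploy", ["./deploy.sh", "--env", "staging"]),
]

def run_pipeline() -> None:
    for stage, command in PIPELINE:
        print(f"--- {stage} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast: a broken build or failing test never reaches deploy.
            print(f"Stage '{stage}' failed; aborting pipeline.")
            sys.exit(result.returncode)
    print("Pipeline succeeded: the change is ready for release.")

if __name__ == "__main__":
    run_pipeline()
```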

Stakeholder engagement and involvement

Yet another ‘best practice’ that should be incorporated into a software development project is solid stakeholder engagement and involvement throughout the dev life cycle.

Ideally, the development team should engage in ‘continuous feedback’ where each release is evaluated in order to improve future releases. 

Most importantly, once a piece of software has been launched, end users/customers should be asked to provide input and feedback about the impact of changes. This feedback should then ‘turn in on itself’, being used to improve the developer’s own development processes. 

Incorporating stakeholder feedback into the software development process ensures that the final product will meet both user needs and business objectives. 

Some user feedback best practices include: 

  • Target the right audience in the first place! You should carefully define who the target users of the software are. 
  • Use unbiased and neutral questions to ensure the feedback isn’t ‘weighted’ in one direction over another. 
  • Ensure feedback is collected at the right time. Users need time to actually use the software. 

Some of the best ways to collect user feedback for software development projects include: 

  • Exploratory interviews. 
  • Testing prototypes. 
  • Testing work-in-progress software. 
  • Analysing quantitative feedback via software analytics. 
  • Net Promoter Score (NPS).

How is software written? 

By now, we’ve defined what software is, the inputs that go into its creation and the methods that can be followed to guide its development. But, how is software actually written?

In this next section, we’ll explore the programming languages that are used to actually create software and bring it to life. 

Caveat

Now, we must begin this section with a caveat. Since the advent of high-level programming languages in the 1950s, literally hundreds of languages have been invented. For the sake of brevity and clarity, we will explore only the most commonly used programming languages below. 

Machine code vs assembly language

It’s important to raise the point that there is a distinction typically made between machine code and assembly language. 

Machine code is computer code that is used to control a computer’s central processing unit (CPU). Note – each architecture family (e.g. ARM) has its own instruction set architecture and by extension its own machine code language. 

Assembly language, on the other hand, refers to any low-level programming language that has a close (but not exact) resemblance to the computer’s machine code. Assembly language (sometimes also referred to as assembly code) must be converted into machine code by an assembler (a form of utility program). 

To make things a tad more confusing, there is a further distinction to be made between assembly language – which is considered to be a ‘low-level programming language’ – and high-level programming languages. 

So, what’s the difference between these two types of programming language? Put simply, low-level programming languages align fairly closely with machine code. High-level programming languages are more abstract in relation to machine code. However, by being more abstract, high-level programming languages are easier for humans to understand, interpret and manipulate. 
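One way to see this gap in abstraction for yourself is Python’s built-in dis module, which disassembles a function into the lower-level bytecode instructions it is compiled to. Bytecode isn’t machine code, but the relationship is analogous – one high-level line becomes several primitive instructions (the function here is invented for the example):

```python
import dis

def total_price(price: float, quantity: int) -> float:
    # One readable, high-level line...
    return price * quantity

# ...is compiled into several primitive stack-machine instructions
# (e.g. LOAD_FAST, BINARY_OP); the exact output varies by Python version.
dis.dis(total_price)
```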

Note – high-level programming languages can be further categorised. Domain-specific languages (DSLs) are specialised to a particular application domain. General-purpose languages (GPLs) are, on the other hand, designed to be applicable across a range of domains.

A note on compilers, libraries and execution

As an historical aside, high-level programming languages emerged in tandem with compilers. 

This was a necessary and logical development as abstract programming languages must be translated into machine code – it is the compiler that facilitates this. 

The steps a compiler takes are generally as follows: 

  • A compiler receives source code (in the form of a high-level programming language) and proceeds to undertake lexical analysis. This involves the compiler ‘tokenising’ the code – breaking it down into keywords, identifiers, operators, or literals.
  • The compiler will then begin parsing the data, undertaking syntax analysis. This step is effectively ‘checking’, where the compiler checks the code against the grammatical rules of the programming language (using a parse tree).
  • The third step is semantic analysis, where the compiler checks for type errors. Depending on the compiler, this stage may also see ‘intermediate representation’ where the compiler generates a more machine-independent representation of the code.
  • Step four is ‘code optimisation’, which may see the compiler optimising the code via techniques such as removing redundant code or reordering instructions. 
  • It’s at step five that full translation into machine code occurs. The compiler will take the optimised intermediate representation and turn it into machine code which can be executed by a computer’s CPU. 
  • Should a program consist of multiple source files, then the compiler will create object files for each file. A ‘linker’ will take each object file (along with any libraries) and combine them into a single executable file. 

Note – the above steps are a generic overview of compiler operation. The specific steps and techniques can vary from compiler to compiler. 
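As a hedged illustration of the very first of these steps, here is a toy lexer in Python that tokenises a simple assignment expression. The token categories are invented for the example; production compilers use far more sophisticated techniques:

```python
import re

# Token categories for a tiny, invented expression language.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT", r"[A-Za-z_]\w*"),
    ("OP", r"[+\-*/=]"),
    ("SKIP", r"\s+"),
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenise(source: str) -> list[tuple[str, str]]:
    """Lexical analysis: break source text into (kind, text) tokens."""
    tokens = []
    for match in MASTER_RE.finditer(source):
        kind = match.lastgroup
        if kind != "SKIP":  # whitespace carries no meaning here, so discard it
            tokens.append((kind, match.group()))
    return tokens

print(tokenise("total = price * 12"))
# [('IDENT', 'total'), ('OP', '='), ('IDENT', 'price'), ('OP', '*'), ('NUMBER', '12')]
```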

What are the different high-level programming languages used in software development? 

In this next section we’ll examine some of the most prevalent examples of high-level programming languages. 

JavaScript

JavaScript is likely something you’ve heard of even if you’re not a developer yourself. 

Often abbreviated as JS, JavaScript traces its origin to the mid-1990s, when the Netscape corporation wanted to create a ‘language for the masses’12 to help non-programmers create their own interactive websites. On its beta launch in September 1995 the language was named LiveScript; the name was changed to JavaScript when the language was officially released in December 1995.

Alongside HTML and CSS, JavaScript is a ‘core technology’ of the web. It is widely claimed that 99% of websites use JavaScript on the client side for webpage behaviour (e.g. to create drop-down menus, facilitate concertinas, or other moving elements on a site)13.

JavaScript is governed by ECMAScript – a specification against which implementations such as JavaScript, JScript, and ActionScript are measured. 

Note – although HTML and CSS are grouped alongside JavaScript as core technologies of the web, they themselves are not considered to be programming languages. This is because they do not have the ability to perform logical operations. 

Java

Although the two are often confused, Java is a programming language entirely distinct from JavaScript. 

Originally developed at Sun Microsystems by Canadian computer scientist James Gosling, Java was released in 1995 and was designed to have as few implementation dependencies as possible. In fact, Java was developed under the ‘write once, run anywhere’ (WORA) mantra14.

Java has found extensive application, being used in everything from web apps to Internet of Things (IoT) devices. 

Python

Python – which consistently ranks as one of the most popular programming languages – was first conceived in the late 1980s by the Dutch programmer Guido van Rossum. Since then, various iterations of Python have been released.

Python has found many applications, including: scripting for web applications, natural language processing, artificial intelligence projects, machine learning projects, and graphical user interface (GUI) development.

SQL (Structured Query Language)

SQL is a highly popular domain-specific programming language that is primarily used to manage data (it finds particular use in handling structured data where relations between entities and variables are especially important). 

First introduced in the 1970s, SQL has been widely adopted – so much so that it became a standard of the American National Standards Institute (ANSI) in 1986. It was subsequently adopted as a standard of the International Organisation for Standardisation (ISO) in 1987.

The roots of SQL stretch back to IBM, where two American computer scientists – Donald D. Chamberlin and Raymond F. Boyce – set out to create a relational database language (apparently after the pair had learnt of Edgar F. Codd’s invention of the relational model for database management). 

SQL is considered to be a ‘backend’ language due to its ability to manage and query databases. 
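As a small, self-contained illustration of the kind of declarative query SQL is used for, the sketch below uses Python’s built-in sqlite3 module with an invented vacancies table (all names and figures are hypothetical):

```python
import sqlite3

# An in-memory database with an invented table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vacancies (title TEXT, location TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO vacancies VALUES (?, ?, ?)",
    [
        ("Backend Developer", "London", 65000),
        ("QA Engineer", "Leeds", 45000),
        ("Data Scientist", "London", 70000),
    ],
)

# Declarative: the query states WHAT is wanted, not how to fetch it.
rows = conn.execute(
    "SELECT title, salary FROM vacancies WHERE location = ? ORDER BY salary DESC",
    ("London",),
).fetchall()
print(rows)  # [('Data Scientist', 70000), ('Backend Developer', 65000)]
```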

C

C is one of the oldest and most widely used programming languages, having been created in the early 1970s by American computer scientist Dennis Ritchie. 

It has found particular use in the development of operating systems (including Windows), but has also been used in the creation of application software. 

So, why has C persisted for so long? It’s down to a number of beneficial properties, including: the structured programming approach that it offers, its portability (being machine-independent), its rich library functions that expedite the dev process, and its speed as a compiled language. 

The C programming language also continues to rank near the top of the TIOBE (The Importance of Being Earnest) Index – a monthly ranking of the most popular languages.

C++

Another programming language with a strong pedigree, C++ is – as you can probably guess – an extension of the C programming language. 

Created by Danish computer scientist Bjarne Stroustrup, C++ was first released in 1985 as a general-purpose programming language that has object-oriented, generic, and functional features. 

C++ is, in fact, so prevalent that it has been standardised by the International Organisation for Standardisation (ISO) as ISO/IEC 14882:2024. 

Like its sibling C, C++ regularly ranks at or near the top of the TIOBE Index.

TypeScript

Designed by Microsoft on a free and open-source basis, TypeScript is a high-level programming language that was designed for the development of large applications. It can also be used to create JavaScript applications (for both client-side and server-side execution). 

PHP

PHP is a general-purpose scripting language that was originally created to facilitate web development (this is evident in PHP’s original name – Personal Home Page). 

The development of PHP began in 1993 under the guidance of Rasmus Lerdorf – a Danish computer programmer. It was eventually released in 1995. 

Since then, PHP has found multiple uses, spanning the creation of dynamic web pages and web forms, the sending and receiving of cookies, the creation of content management systems (CMS), and more. 

Perhaps the most prevalent example of PHP’s use is the fact that it sits at the core of WordPress – the Internet’s biggest blogging system.

Today, PHP is most commonly referred to as ‘PHP: Hypertext Preprocessor’ – a recursive acronym.

Software prototyping

As you’ll have seen mentioned above, some development methodologies incorporate a degree of ‘prototyping’ into their processes. 

Therefore, prototyping deserves a discussion in its own right. 

But, what exactly is prototyping?

In the simplest possible terms, prototyping refers to the process of developing incomplete versions of a software program in order to assess feasibility and functionality before committing to a full and final build. 

In order for prototyping to be effective, there are several basic principles that must be considered: 

  • Prototyping should not occur in isolation (many people make the mistake of thinking that prototyping is a development methodology in its own right – it’s not). Prototyping should instead be used to try out particular features in the context of a full and proper methodology (such as RAD or Spiral, outlined above).
  • Prototyping should facilitate the reduction of project risk by breaking it down into smaller segments. This should make the process of change easier throughout the life of the project.
  • Where possible, clients should be actively involved in prototype development.
  • Prototypes should be developed with an open expectation: a prototype may be abandoned, or developed further into a final working product. 

Underlying all software prototyping is the necessity to have a thorough and complete understanding of the fundamental problem that the software aims to solve. 

Software licensing and copyright

As you’ve read so far, a considerable amount of time, energy, money and intellectual property goes into the development of software. 

It’s for these reasons that operating systems and application software are governed by a variety of legal instruments. 

Since the early 1970s, a variety of licences have evolved which establish the rights associated with the use of software. As we’ll see later in this article, there are many different types of licence. However, they do share some common features:

  • Ownership – the licence specifies the ownership terms of the software. This is typically characterised as follows – you don’t own the software, but you are granted the rights to use it according to the terms of the licence.
  • Usage rights – the licence will set out the ways in which a user can use the software.
  • Restrictions – a software licence will also normally explicitly state what users cannot do with the software.
  • Compliance – software licences normally include a section that sets out the actions the software owner will take should users fail to comply with the terms of the licence. 

What are the different types of software licence? 

From the earliest days of software, a varied licensing regime has developed. Below, we have set out the most common types of software licence in use: 

Free and open:
  • Public domain (and equivalent) – waives copyright protection.
  • Permissive licences – grant usage rights (including the right to relicense).
  • Copyleft licences – not only grant usage rights, but also forbid proprietisation.

Non-free:
  • Non-commercial licences – grant rights for non-commercial use only.
  • Proprietary licences – the traditional use of copyright.
  • Trade secret – no information is made public.

There is an important distinction to be made at this point. Depending on the jurisdiction, it is the source code of a piece of software that is protected by copyright law. 

The underlying ideas or algorithms of a program are not generally protected by copyright. It’s for this reason that many software companies regard their ideas and algorithms as ‘trade secrets’, and often require their developers to sign non-disclosure agreements. 

Intellectual property and open source software issues

An often overlooked – yet critically important – issue in the development of software is the interplay between proprietary code and open-source code/applications. 

In some instances, software developers have been known to integrate open-source code or libraries into a proprietary piece of software. 

Why is this a problem? Because many open-source licences – notably ‘copyleft’ licences such as the GPL – stipulate that derivative works be released under the same licence. That is, if a piece of software incorporates such an open-source component, then that software may itself have to be made open source, too. 

The way in which software developers typically overcome these barriers is either to use a proprietary alternative (which adds cost to a project) or to write their own equivalent code from scratch (which adds labour time to a project). 

This is certainly an obstacle, but an important one to consider and overcome if your piece of software is not to breach licence terms or compliance requirements. 

In need of software development services? 

Then speak to the GoodCore team today. Our bespoke software development services cover every step of the development journey, from ideation to deployment and ongoing support. 

It’s a service which we’ve developed over the course of 19 years and which has received a five-star rating on Clutch. 

Explore our bespoke software development services now.
See our services

REFERENCES

  1. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 16. ISBN 0-619-06489-7.
  2. ISO/IEC 22123-1:2023(E) – Information technology – Cloud computing – Part 1: Vocabulary. International Organisation for Standardisation. 2023.
  3. Watt, Andy (2023). Building Modern SaaS Applications with C# and .NET: Build, Deploy, and Maintain Professional SaaS Applications. Packt. ISBN 978-1-80461-087-9.
  4. Mell, Peter; Grance, Timothy (September 2011). The NIST Definition of Cloud Computing (Technical report). National Institute of Standards and Technology; U.S. Department of Commerce. doi:10.6028/NIST.SP.800-145. Special Publication 800-145.
  5. Tracy, Kim W. (2021). Software: A Technical History. Morgan & Claypool Publishers. ISBN 978-1-4503-8724-8.
  6. Systems and software engineering – Vocabulary, ISO/IEC/IEEE Std 24765:2010(E), 2010.
  7. ‘The history of coding and software engineering’. www.hackreactor.com. Retrieved 2021-05-06.
  8. Bourque, Pierre; Fairley, Richard E. (Dick), eds. (2014). Guide to the Software Engineering Body of Knowledge (SWEBOK), Version 3.0. IEEE Computer Society.
  9. ‘The Humble Programmer’. E. W. Dijkstra Archive (EWD340).
  10. Beck, Kent; Grenning, James; Martin, Robert C.; Beedle, Mike; Highsmith, Jim; Mellor, Steve; van Bennekum, Arie; Hunt, Andrew; Schwaber, Ken; Cockburn, Alistair; Jeffries, Ron; Sutherland, Jeff; Cunningham, Ward; Kern, Jon; Thomas, Dave; Fowler, Martin; Marick, Brian (2001). ‘Manifesto for Agile Software Development’. Agile Alliance.
  11. Rerych, Markus. ‘Wasserfallmodell > Entstehungskontext’. Institut für Gestaltungs- und Wirkungsforschung, TU Wien.
  12. Fin JS (17th June 2016). ‘Brendan Eich – CEO of Brave’. YouTube.
  13. ‘Usage Statistics of JavaScript as Client-Side Programming Language on Websites’. W3Techs.
  14. ‘Write once, run anywhere?’. Computer Weekly. 2 May 2002.
