Large Database Architecture and Design: Mistakes and solutions

What does database architecture mean?

Database architecture covers the design, development, implementation and maintenance of the computer systems that store and organize information for businesses, agencies and institutions. Databases are defined and queried with specialized languages, the best known being Structured Query Language (SQL), and run on database programs such as Oracle and Microsoft Access.

This blog post discusses common mistakes in database architecture and design.

Common database design mistakes

  1. Poor design/planning

Good databases are built with consideration and pre-planning. A database needs care and attention to the needs of the project that will use it, and it should be structured to meet them. Without pre-planning, the project’s database will drift off track and lose direction. Poor planning at the beginning forces the database to be redesigned with major changes to its structure, which has a big impact on the project as a whole and extends its timeline.

  2. Poor naming standards

Consistency is the key aspect of naming across a database. Names need to be descriptive, which makes it easy for programmers and other users to identify what they need and to handle other requests. Clear names on the storage side also help users move faster through their requests. Finally, names should be consistent, staying the same across the files in the database, to avoid redundant data and useless files. A small sketch of such a convention follows.
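
As a brief, hedged SQL sketch (all names hypothetical): pick a convention once and apply it everywhere, for example singular snake_case table names, the table name prefixed on its key, and the same column name wherever the same fact appears.

    CREATE TABLE customer (
        customer_id   int PRIMARY KEY,
        customer_name varchar(100) NOT NULL
    );

    CREATE TABLE customer_address (
        address_id  int PRIMARY KEY,
        customer_id int REFERENCES customer (customer_id),  -- same name as in customer
        city_name   varchar(60)
    );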

  3. Lack of documentation

Databases need to be well documented. If objects, columns and so on are well named, it is clear to anyone who uses the database what it is modeling. The documentation should include enough information that, if the database is handed over to another user, that person can find the data needed to process their requests. Well-documented databases also reduce the chance of bugs in the code.
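
As a small, hedged sketch, several engines (PostgreSQL and Oracle, for example) let documentation live inside the schema itself via COMMENT ON; the table and column names here are hypothetical.

    COMMENT ON TABLE customer IS 'One row per registered customer; loaded by the sign-up service.';
    COMMENT ON COLUMN customer.customer_status IS 'A = active, S = suspended, C = closed';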

  4. Trying to build generic objects

Having one generic stored procedure that can carry out its operations on any selected table can sound like a time saver. However, generic objects are harmful to performance in the long run, since SQL assembled at runtime for an arbitrary table cannot be fully checked or optimized in advance, and they should therefore be avoided. The main point is simple: avoid coding generic objects.

One way to avoid this is to build a code generation tool in the language you are already using, for instance SQL itself, and use it to generate a specific procedure for each table in the system, as in the sketch below.
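
A minimal T-SQL sketch of the contrast; all table and procedure names are hypothetical. The generic procedure must build dynamic SQL, while the generated per-table procedure is plain, checkable and plan-cacheable.

    -- The generic anti-pattern: one procedure for every table, via dynamic SQL.
    CREATE PROCEDURE GetAnyRow @TableName sysname, @Id int AS
    BEGIN
        DECLARE @sql nvarchar(max) =
            'SELECT * FROM ' + QUOTENAME(@TableName) +
            ' WHERE Id = ' + CAST(@Id AS nvarchar(12));
        EXEC (@sql);  -- no compile-time checking, no reusable plan
    END;
    GO

    -- The generated alternative: one small, specific procedure per table.
    CREATE PROCEDURE Customer_GetById @Id int AS
        SELECT CustomerId, CustomerName FROM Customer WHERE CustomerId = @Id;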

  5. Not using SQL facilities to protect data

Organizations may choose different mechanisms to achieve confidentiality and integrity protection, as appropriate. If the confidentiality and integrity of SQL Server data are not always protected, all the data is open to compromise and unauthorized modification. Protective measures encompass encryption, physical security of the facility where the storage devices reside, operating system file permissions, and organizational controls. Each of these should be applied as necessary and appropriate.
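
Inside the database itself, standard SQL permissions and constraints are the first line of defense. A minimal sketch, with hypothetical object and role names:

    -- Confidentiality: the reporting role may read customer data, nothing more.
    GRANT SELECT ON customer TO reporting_role;
    REVOKE INSERT, UPDATE, DELETE ON customer FROM reporting_role;

    -- Integrity: the engine itself rejects invalid status codes.
    ALTER TABLE customer ADD CONSTRAINT ck_customer_status
        CHECK (customer_status IN ('A', 'S', 'C'));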

  6. Not using stored procedures to access data

Uses for stored procedures include data validation (integrated into the database) and access-control mechanisms. Furthermore, stored procedures can consolidate and centralize logic that was originally implemented in applications. To save time and memory, extensive or complex processing that requires executing several SQL statements can be saved into a stored procedure, which all applications then call. Stored procedures can also be nested, by executing one stored procedure from within another.
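
A minimal T-SQL sketch of both ideas, validation in the database and one procedure calling another; the table, column and procedure names are hypothetical.

    CREATE PROCEDURE LogOrderEvent @OrderId int, @EventName varchar(50) AS
        INSERT INTO OrderAudit (OrderId, EventName, EventTime)
        VALUES (@OrderId, @EventName, SYSDATETIME());
    GO

    CREATE PROCEDURE CancelOrder @OrderId int AS
    BEGIN
        -- Validation lives in the database, next to the data it protects.
        IF NOT EXISTS (SELECT 1 FROM Orders WHERE OrderId = @OrderId AND Status = 'OPEN')
            THROW 50001, 'Order is not open and cannot be cancelled.', 1;

        UPDATE Orders SET Status = 'CANCELLED' WHERE OrderId = @OrderId;

        EXEC LogOrderEvent @OrderId, 'CANCELLED';  -- one procedure executing another
    END;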

  7. Ignoring normalization

Normalization is a systematic way to guarantee that a database structure is suitable for general-purpose querying and free of certain undesirable characteristics: insertion, update and deletion anomalies that could lead to a loss of data integrity. A short sketch follows the list of objectives below.

The objectives of normalization were stated as follows:

  • To free the collection of relations from undesirable insertion, update and deletion dependencies.
  • To reduce the need for restructuring the collection of relations as new types of data are introduced, and thus increase the lifespan of application programs.
  • To make the relational model more informative to users.
  • To make the collection of relations neutral to the query statistics, where these statistics are liable to change as time goes by.
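
As a brief, hedged sketch with hypothetical names: a flat orders table repeats customer details on every row, so changing one customer’s email means updating many rows (an update anomaly). Normalizing stores each customer fact exactly once.

    -- Unnormalized: customer details are repeated on every order row.
    CREATE TABLE orders_flat (
        order_id       int PRIMARY KEY,
        customer_name  varchar(100),
        customer_email varchar(100),  -- must change on every row when it changes
        order_total    decimal(10,2)
    );

    -- Normalized: customer facts live in one place, orders reference them.
    CREATE TABLE customer (
        customer_id    int PRIMARY KEY,
        customer_name  varchar(100),
        customer_email varchar(100)
    );

    CREATE TABLE orders (
        order_id    int PRIMARY KEY,
        customer_id int REFERENCES customer (customer_id),
        order_total decimal(10,2)
    );
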
  8. Ignoring the purpose of the data

Data is stored to be consumed later, and the goal is always to store it and retrieve it in the most efficient manner. To achieve this, the database designer must know what the data is going to represent, how and at what rate it is going to be acquired, what its operational volume will be and, eventually, how it is going to be used.

For example, an industrial information system where data records are gathered manually every day will not have the same data model as an industrial system in which information is generated in real time. Handling a few hundred or a few thousand records per month is very different from managing millions of them in the same period. If data volumes are going to be big, the designers must make special provisions to keep the database efficient and usable, as in the sketch below.
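
One such provision, sketched here in PostgreSQL-style declarative partitioning with hypothetical names, is to split a high-volume table by time range so that queries and maintenance touch only the partitions they need.

    CREATE TABLE sensor_reading (
        reading_time timestamp NOT NULL,
        sensor_id    int NOT NULL,
        reading      double precision
    ) PARTITION BY RANGE (reading_time);

    -- One partition per month keeps scans and maintenance bounded.
    CREATE TABLE sensor_reading_2019_01 PARTITION OF sensor_reading
        FOR VALUES FROM ('2019-01-01') TO ('2019-02-01');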

  9. Using an identity column as your only key

An identity column is a column in a database table whose values are generated by the database itself, much like an AutoNumber field in Microsoft Access or a sequence in Oracle. Because the concept is so important in database design, many RDBMSs implement some type of generated key, though each has its own terminology.

An identity column differs from a primary key in that its values are controlled by the server and commonly cannot be changed. In many cases an identity column is used as the primary key; the mistake is making it the only key, since the generated number says nothing about the real-world row, and without a unique constraint on the identifying business columns, duplicate rows can slip in unnoticed.
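
A minimal sketch of the remedy in T-SQL, with hypothetical names: keep the identity column as a convenient surrogate primary key, but also declare the natural key unique.

    CREATE TABLE employee (
        employee_id int IDENTITY(1,1) PRIMARY KEY,  -- surrogate key, server-generated
        national_id varchar(20) NOT NULL,
        full_name   varchar(100) NOT NULL,
        CONSTRAINT uq_employee_national_id UNIQUE (national_id)  -- natural key stays unique
    );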

  10. Lack of testing

Databases, the collections of interconnected files on a server that store information, do not all address the same type of data; databases may be heterogeneous. As a result, many kinds of implementation and integration mistakes can occur in large database systems, which negatively affect the system’s performance, reliability, consistency and security. It is therefore vital to test, in order to achieve a database system that satisfies the ACID properties of a database management system.

One of the most critical layers is the data access layer, which deals with the database directly during the communication process. Database testing mainly takes place at this layer and involves testing strategies such as quality control and quality assurance of the databases.
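
As a small, hedged illustration (reusing the hypothetical tables from the normalization sketch), even plain SQL works as a smoke test at this layer: each query below should return zero rows, and any row it does return is a defect.

    -- Referential check: every order must point at an existing customer.
    SELECT o.order_id
    FROM orders o
    LEFT JOIN customer c ON c.customer_id = o.customer_id
    WHERE c.customer_id IS NULL;

    -- Domain check: an order total must never be negative.
    SELECT order_id, order_total
    FROM orders
    WHERE order_total < 0;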

Large Database Architecture and Design is one of the topics of the “Architecture, Engineering and Technology” conference, which will be held in Cairo on 26-28 February 2019.

If you are interested in this field, click on: https://goo.gl/BA1voy
