A major concern in the field of information processing and analysis is improving the structuring and management of big data sets so as to enhance and accelerate data access, simplifying their subsequent processing by intelligent analytical techniques.
This information, which may come from many kinds of sources, is typically stored in data centers inside traditional relational databases. Because access and operation response times in these structures are slow, a consequence of how data is arranged within them, the information must be reorganized: the original relational databases are transformed into special analytical structures that aggregate data along multiple dimensions. This transformation speeds up queries at the cost of storage space, since building the analytical structures requires redundant copies of the data.
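As a minimal sketch of this transformation, the snippet below pre-aggregates a small relational table along several combinations of dimensions, in the spirit of an OLAP cube. The table and column names (`sales`, `region`, `product`, `month`) are illustrative assumptions, not part of any concrete system; the point is that each derived table duplicates information already present in the source, which is exactly the storage space being traded for query speed.

```python
import sqlite3

# Illustrative relational-to-analytical sketch; schema and data are invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized source table, as it would sit in a traditional relational DB.
cur.execute("CREATE TABLE sales (region TEXT, product TEXT, month TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO sales VALUES (?, ?, ?, ?)",
    [
        ("north", "widget", "2024-01", 120.0),
        ("north", "gadget", "2024-01", 80.0),
        ("south", "widget", "2024-02", 200.0),
    ],
)

# Pre-aggregate along several combinations of dimensions. Every derived table
# is redundant with the source data -- the storage cost mentioned above.
for dims in (("region",), ("product",), ("region", "product"),
             ("region", "month"), ("product", "month")):
    cols = ", ".join(dims)
    cur.execute(
        f"CREATE TABLE agg_{'_'.join(dims)} AS "
        f"SELECT {cols}, SUM(amount) AS total FROM sales GROUP BY {cols}"
    )

# A query against a pre-aggregated table is now a plain scan over a few rows
# instead of an on-the-fly aggregation over the full fact table.
print(cur.execute("SELECT * FROM agg_region").fetchall())
conn.close()
```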
One of the main problems of current data storage solutions stems from the high computational cost of security measures: block ciphers such as AES, and hash functions such as SHA-1 used for integrity verification, become inefficient when applied over big data sets. Hence, it is vital to improve the performance of these methods or to devise new algorithms designed specifically for analytical databases.
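To make that cost concrete, here is a minimal benchmark sketch: it encrypts a buffer with AES and computes a SHA-1 digest of the result, timing both operations together. The payload size is an arbitrary assumption for illustration, and the third-party `cryptography` package is used for AES since the Python standard library does not provide it.

```python
import hashlib
import os
import time

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Hypothetical 32 MiB payload; real analytical workloads process far larger
# volumes, which is where these per-byte costs become prohibitive.
payload = os.urandom(32 * 1024 * 1024)

key, nonce = os.urandom(32), os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

start = time.perf_counter()
ciphertext = encryptor.update(payload) + encryptor.finalize()  # confidentiality
digest = hashlib.sha1(ciphertext).hexdigest()                  # integrity
elapsed = time.perf_counter() - start

print(f"AES-256-CTR + SHA-1 over {len(payload) >> 20} MiB: {elapsed:.2f} s")
```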
Furthermore, the adoption of analytical databases is constrained by scalability problems stemming from the storage space they require. To address the efficiency problems associated with accessing their information, many solutions have been proposed that store the data in main memory (a concept known as in-memory databases). The greatest advantage of this approach is its extremely low response time to user queries. Nevertheless, it does not cope well with complex operations, so its use is limited to simpler ones.
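A minimal sketch of that trade-off, assuming a primary-key point lookup as the "simple operation": the same table is built in a temporary on-disk SQLite database and in an in-memory one, and lookups are timed on both. The table and sizes are invented for illustration, and absolute numbers will vary with caching and hardware; only the relative latency matters here.

```python
import sqlite3
import time

def build(conn: sqlite3.Connection) -> None:
    # Illustrative table; half a million rows is an arbitrary choice.
    conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, value REAL)")
    conn.executemany(
        "INSERT INTO events VALUES (?, ?)",
        ((i, float(i % 97)) for i in range(500_000)),
    )
    conn.commit()

def time_lookups(conn: sqlite3.Connection, n: int = 10_000) -> float:
    start = time.perf_counter()
    for i in range(n):
        conn.execute("SELECT value FROM events WHERE id = ?", (i,)).fetchone()
    return time.perf_counter() - start

# An empty path gives SQLite a private temporary on-disk database, deleted
# when the connection closes; ":memory:" keeps everything in RAM.
disk = sqlite3.connect("")
mem = sqlite3.connect(":memory:")
for conn in (disk, mem):
    build(conn)

print(f"on disk:   {time_lookups(disk):.3f} s")
print(f"in memory: {time_lookups(mem):.3f} s")
```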
At Gradiant we are working to improve the behaviour of these systems, adapting them to different environments and to various types of information sources.