Frequently Asked Questions
Which problems does eccenca solve?
Working with data in organizations is highly complex. Information is siloed, semantics are hidden, and the correlation to processes and other data sources is hardly documented. As a consequence, creating IT solutions, supporting business functions or providing data-driven insights has become extremely hard. eccenca cuts through this complexity and makes accessing and understanding data easy. By doing so, it helps organizations become agile, data-driven and automated, because it helps them get value from their data faster. In a nutshell, we turn application-centric companies into data-centric digital natives.
What are typical use cases for an eccenca solution?
eccenca solutions fit any organization:
- that wants to drive digital transformation,
- that wants to be able to automate with knowledge,
- that wants to be able to integrate and reuse data,
- where data and IT are complex,
- that wants to successfully build Product Digital Twins, 360° GDPR compliance, Supply Chain Networks, ITSM automation, skill management or any other solution which needs access to, interlinkage of, and context information for a large volume of data.
Which products does eccenca provide?
Please refer to our product page.
Which industries does eccenca serve?
We have references with customers in automotive, telecommunications, mobility, financial services and life science. The technology is domain-agnostic, so the product does not require customizing to create value in any other industry.
Which countries are served by eccenca?
eccenca has employees living in Germany, the Netherlands, Belgium, France and Spain. From these locations we serve all of Central Europe. The Americas and Asia are currently being served in close collaboration with local partners.
Why should I choose you and not hire my own software development team?
We have spent more than 10 years with a team of 20-30 of the world's leading linked data experts building this platform. In addition, we are constantly adding features and functions through our own development, sponsored developments by customers and donations from our partners. In a nutshell, we are 10+ years ahead.
What is an average project duration?
We typically start with a 2-12 week PoC. The actual projects that follow depend heavily on the scope: we have run production-grade projects that took 6 months, as well as more evolutionary engagements that have been running for 2 years. You set the scope. We deliver agile. A positive ROI is generally achieved within 6 months.
Does eccenca do a PoC?
Yes. We generally start with a PoC.
Does eccenca provide tool / methodology training?
Does eccenca provide support service?
Most customers subscribe to our silver support package, which provides support on workdays from 9am to 5pm. We offer highly customized support options upon demand whenever necessary.
What is the license model of eccenca?
eccenca is available as an annual subscription license per production instance. The enterprise license automatically comes with free licenses for development, testing and staging.
About eccenca Corporate Memory
What is eccenca Corporate Memory?
eccenca Corporate Memory is a knowledge graph platform that builds the foundation to automate with knowledge as well as to master IT and data complexity. When we created the product, we designed the knowledge graph platform as a map and documentation of all the experiences (data and knowledge sources) an organization has. Its purpose is to document business rules, constraints, expert knowledge of the inner workings of products and services, etc., and to combine this knowledge with the data sources. So it really is much more than a "memory": it could truly become more of a brain for a company. We are already seeing parts of that with clients that use it to automate highly complex processes and systems such as the security infrastructures of banks, huge networks of configurable edge devices, or the coordination of global MRO services.
Please also refer to our product page.
Which capabilities does eccenca Corporate Memory provide?
eccenca Corporate Memory establishes a catalogue of all available data sources and merges this data with knowledge as it is documented in thesauri or ontologies. Once data is catalogued and linked to the ontologies, eccenca Corporate Memory starts to identify linkages between data points and data sets to create a truly transparent and interlinked data platform. Your data becomes findable, accessible, interoperable and reusable throughout the entire company, no matter where it is stored. In this way, eccenca Corporate Memory provides the capabilities to automate processes and to build artificial intelligence technology.
What differentiates eccenca from other knowledge graph vendors?
eccenca is not a graph database / triple store vendor but operates on top of such stores. We help our clients with unparalleled tooling to turn their existing (legacy) data sources into a knowledge graph, literally turning their data silos into actionable data assets.
How can Corporate Memory run in an enterprise environment?
Corporate Memory can run on any infrastructure, from your laptop (for evaluation and demo purposes) to local VMs to cloud infrastructure. We natively support Docker-based deployments by providing Docker images, as well as "bare metal" deployments through .jar and .war artifacts.
What does the user interface of Corporate Memory look like?
Please check our product page.
How does eccenca connect the systems, and where is the transition point between customer and eccenca?
We connect to your data using files from local or distributed (HDFS) file systems, or via connections to SPARQL, JDBC and REST APIs. The user interaction happens through the components DataManager and DataIntegration.
Does eccenca use existing ontologies or develop new ones?
We do both. We are committed to many domain ontology development initiatives, (re-)use existing ontologies, and also individually engineer them for and with our customers.
Which database does eccenca use?
First-class citizens are Ontotext GraphDB and Virtuoso. We also provide a generic SPARQL 1.1 connector module that can connect to compliant triple stores (e.g. MarkLogic can be used that way).
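As a rough illustration of what a generic SPARQL 1.1 connection involves, the sketch below builds a standard SPARQL 1.1 Protocol query request using only Python's standard library. The endpoint URL is a hypothetical placeholder, not an eccenca- or store-specific API; any compliant triple store accepts requests of this shape.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical endpoint URL -- substitute your triple store's SPARQL endpoint.
ENDPOINT = "https://example.org/sparql"

def build_sparql_request(query: str) -> Request:
    """Build a SPARQL 1.1 Protocol query request (POST, URL-encoded body)."""
    body = urlencode({"query": query}).encode("utf-8")
    return Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/x-www-form-urlencoded",
            # Ask for the standard JSON results serialization.
            "Accept": "application/sparql-results+json",
        },
        method="POST",
    )

req = build_sparql_request("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10")
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) returns a JSON result set that any SPARQL 1.1 compliant store produces in the same format.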
Could all data be loaded into an Ontotext database?
Yes. In principle, our components and all our suggested triple stores support deployment and scalability options that can help you grow your Knowledge Graph on demand. However, we suggest starting with highly connected and referential data first, as this sort of information asset fosters the quick build-up of a relevant Knowledge Graph.
Is it possible to choose another database provider?
Yes, see above.
Which data formats are supported by eccenca?
As of December 2020:
- multi CSV ZIP
Is it possible to program the front-end individually?
Our clients love us for the capability to be able to configure our user interface. Check our documentation for more information: https://documentation.eccenca.com/latest/explore-and-author/building-a-customized-user-interface
Developers can build 100% custom solutions using our front-end framework (which is the basis for our own application). Also see: https://github.com/eccenca/
Does eccenca use any Microsoft products to run the software?
Is eccenca Corporate Memory an open source software?
eccenca is fully compliant with open standards (RDF, OWL, etc.) as set forth by the W3C. The software is not open source. Open source would only be valuable if the solution threatened to create vendor lock-in (as relational databases do). But since the schema is subject-predicate-object (SPO) and the semantics are documented via ontologies, there is no vendor lock-in and thus no need for open source. Anyone still worried can get a source-code escrow agreement.
Does eccenca have an import API?
Yes. Please check our documentation on https://documentation.eccenca.com/latest/develop.
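The concrete import endpoints are described in the documentation linked above. As a generic sketch only: stores compliant with the W3C SPARQL 1.1 Graph Store HTTP Protocol accept RDF uploads shaped like the request below. The store URL and graph IRI are hypothetical placeholders, not eccenca's actual API.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical store URL -- the real import API is described at
# https://documentation.eccenca.com/latest/develop
STORE = "https://example.org/store"

def build_import_request(turtle: str, graph_iri: str) -> Request:
    """Build a SPARQL 1.1 Graph Store Protocol request that replaces
    the named graph <graph_iri> with the supplied Turtle payload."""
    url = STORE + "?" + urlencode({"graph": graph_iri})
    return Request(
        url,
        data=turtle.encode("utf-8"),
        headers={"Content-Type": "text/turtle"},
        method="PUT",  # PUT replaces the graph; POST would merge into it
    )

ttl = "<urn:ex:s> <urn:ex:p> <urn:ex:o> ."
req = build_import_request(ttl, "urn:graph:demo")
```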
About Knowledge Graph Technology
What is the main benefit of a knowledge graph?
The main benefit of knowledge graphs in general is their ability to create identity for data and thus start giving that data meaning and context. By doing so, knowledge graphs effectively de-silo data, because this context used to be available only in combination with application code and human experts at hand. Knowledge graphs take this knowledge and turn it into machine-readable, human-interpretable and actionable constraints and rules, so that data can be used and reused across the organization regardless of whether access to experts or applications is available.
eccenca is the world's leading knowledge graph platform because it provides the highest degree of scalable automation in creating, maintaining and sustaining a knowledge graph across dozens, hundreds or even thousands of data sets.
How can a knowledge graph be scaled?
It can grow in terms of the number of data sets that are linked to the knowledge graph's concepts, which are represented by the nodes in the graph. But it can also scale by adding semantic depth to describe the data at hand, that is, by adding more nodes, connections and business logic that can then be applied to instance data from the various silos.
If you are asking about performance in querying the graph, the graph can be scaled across any number of processing nodes. If you are asking about data ingestion, the ingestion pipelines can, for instance, be turned into Spark scripts on cloud infrastructure to cope with peak loads of data. If you are asking about data volume, the answer is: the knowledge graph does not contain instance data but metadata. This metadata refers to the semantics as well as to data sources or intermediate storage, such as data warehouses and data lakes, to which it can be applied.
What are the differences between relational databases and eccenca?
They differ in their paradigm of how to represent data. Relational databases restrict data representation to a tabular view. Interlinking is hardly possible, and if two tables have comparable data but use different headers for, e.g., a name, it takes complex Extract-Transform-Load (ETL) processes to integrate them. A knowledge graph works like a network where every data point is connected by context and enriched by relevant background information (metadata on restrictions, lineage, constraints, etc.). By using ontologies, every data point and every relation between data points is consistently defined on a global level. Furthermore, the translations between modeling levels (conceptual vs. technical) are eliminated. Thus, data integration, even of external sources, is easy and fast. This provides more flexibility, scalability and agility to companies.
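The contrast can be illustrated with a minimal, simplified Python sketch (all names and values are invented for illustration): two tables with different column headers flatten into subject-predicate-object triples and then merge by simple set union, once an ontology-style mapping gives both columns the same shared predicate.

```python
# Two "silos" store the same kind of fact under different column names.
crm_rows = [{"cust_name": "Ada", "cust_city": "Leipzig"}]
erp_rows = [{"name": "Grace", "town": "Paris"}]

# An ontology-style mapping gives both columns one shared, global meaning.
PREDICATES = {
    "cust_name": "schema:name", "name": "schema:name",
    "cust_city": "schema:addressLocality", "town": "schema:addressLocality",
}

def to_triples(rows, id_prefix):
    """Flatten table rows into (subject, predicate, object) triples."""
    triples = []
    for i, row in enumerate(rows):
        subject = f"{id_prefix}/{i}"
        for column, value in row.items():
            triples.append((subject, PREDICATES[column], value))
    return triples

# Once both silos are triples, they merge by plain set union -- no ETL step.
graph = set(to_triples(crm_rows, "crm")) | set(to_triples(erp_rows, "erp"))
names = {o for s, p, o in graph if p == "schema:name"}  # {"Ada", "Grace"}
```

The same union works for any further source: only a mapping of its columns to shared predicates is needed, not a schema migration.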
What is an ontology?
Please refer to https://www.wikiwand.com/en/Ontology_(information_science).
What is a knowledge graph and RDF?
Please refer to https://www.wikiwand.com/en/Knowledge_graph.