Link to the Swagger documentation
Open Content Swagger REST-API documentation
The link above assumes that you are running Open Content locally. The API docs can be found at http://localhost:8080/opencontent/apidocs/
REST API for content There is a Swagger-documented REST API available for adding, modifying and deleting content, as well as performing various kinds of queries. The query syntax follows the standard Solr syntax, but adds a set of extra convenience functions, such as related content.
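As a minimal illustration, a Solr-style query against the search endpoint might look like the sketch below. The local host, port and admin credentials are assumptions taken from the lab setup elsewhere in these docs, and the request line is commented out so the sketch runs without a server:

```shell
# Assumed local instance and credentials, as used in the labs.
OC_HOST="http://localhost:8080/opencontent"
SEARCH_URL="$OC_HOST/search?q=contenttype:Article&properties=uuid"
# Uncomment to run against a live instance:
# curl -s -u admin:admin "$SEARCH_URL" | jq .
echo "$SEARCH_URL"
```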
REST API for admin There is also a REST API available for all kinds of administration tasks, such as index, properties, extraction and storage management.
Event log API The events for the last 30 days are recorded and stored in the event log, accessible through the event log API.
Read more about the Open Content REST API, and try it out yourself.
Onboarding We offer onboarding, on location or remote, for Open Content developers to get the most out of the available tools and solutions.
Open Content Notifier
A module for creating event-driven workflows. It lets you listen for changes to specific queries and get notified when the answer to a query changes.
Notifier can be used to release caches or to send notifications to Live Content Cloud. It can be used with any server that accepts HTTP POST requests: register the URL to which the notification details should be sent.
Documentation about the Open Content Notifier can be found here:
https://naviga-hub.atlassian.net/wiki/spaces/NP/pages/6558221758/Open+Content+Notifier+English
In parallel, we are working on a modern, easy-to-consume JSON-based version of the document format. That format will successively replace the NewsML XML format. More info about the new Naviga JSON document format can be found at https://app.gitbook.com/@infomaker/s/document-format-v2/.
Remove all uploaded objects so that the exercises can be run again.
This exercise shows how to delete objects in Open Content.
./delete.sh [uuid] will delete the object with the specified uuid
./delete-mine.sh will delete objects with source set to lab-$(whoami)
The upload scripts in these exercises set the source to lab-$(whoami).
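The authoritative commands are in delete.sh and delete-mine.sh; the sketch below only illustrates the shape of a delete-by-uuid request. The /objects/{uuid} endpoint path is an assumption for illustration, not confirmed by this document, and the request itself is commented out:

```shell
OC_HOST="http://localhost:8080/opencontent"
UUID="b7399f0c-fb3d-4a4f-b849-9935a77d9512"  # example uuid from the lab files
DELETE_URL="$OC_HOST/objects/$UUID"          # hypothetical endpoint path
# curl -s -u admin:admin -X DELETE "$DELETE_URL"
echo "$DELETE_URL"
```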
This exercise will upload the article holding references to the previously uploaded objects.
This exercise shows:
Upload of an article with the correct filename; the script inserts the uuid and filename for the 3 images
This exercise will upload 6 concepts to Open Content. These concepts are referenced from the article which will be uploaded in lab 3.
The script ./upload-concepts.sh will upload 6 concepts to Open Content using a curl multipart POST request.
For more details on how this is done, take a look at the script:
cd ~/oc-lab/lab-newsitem/1-concept
less upload-concepts.sh
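As a rough sketch of what the script does (the endpoint and multipart field names below are assumptions; upload-concepts.sh is the authoritative version), it loops over the concept uuids and POSTs one file per concept:

```shell
OC_HOST="http://localhost:8080/opencontent"
# In the lab the uuids come from the `uuids` file; two are inlined here.
uuids="8c7437ce-a7ca-414d-8bfc-7bf2d1054fc3
db09e859-43d4-42f8-a6ca-c810b653ec6a"
count=0
for uuid in $uuids; do
  count=$((count + 1))
  # Hypothetical endpoint and field names; see upload-concepts.sh for the real request.
  # curl -s -u admin:admin -X POST "$OC_HOST/objectupload" -F "id=$uuid" -F "file=@$uuid"
done
echo "$count concepts to upload"
```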
Image upload to Open Content, calculating the correct filename for use when Open Content is the content storage for Writer articles.
This exercise shows:
Calculating the filename used when the image is referenced by Writer
Using openssl
Creating the preview and thumb used by Open Content
Creating the XML metadata file
Uploading the image with preview, thumb and metadata file
For more details see the ./upload-image.sh file.
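The real naming rule lives in upload-image.sh; the sketch below is a purely illustrative stand-in that derives a name from the image uuid with openssl, just to show the kind of calculation involved. The sha1-of-uuid rule and the resize geometries are assumptions, not the actual convention:

```shell
# Illustrative only: NOT the real Writer filename convention.
UUID="0a18480e-1486-4ce5-8f61-ebb67d3d8938"
HASH=$(printf '%s' "$UUID" | openssl dgst -sha1 | awk '{print $NF}')
EXTERNAL_NAME="$HASH.jpg"
echo "$EXTERNAL_NAME"
# Preview and thumb are generated with imagemagick, along these lines:
# convert one.jpg -resize 800x800 one_preview.jpg
# convert one.jpg -resize 256x256 one_thumb.jpg
```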
When uploading images for Digital Writer, the image file needs to be uploaded to an internal S3 bucket and also copied to an external S3 bucket under a calculated filename.
Open Content: Make your content available to your users, developers and readers
If you are thinking about creating your own headless CMS, Open Content will fit right in, and will solve several tedious parts of your journey ahead – storage, API’s, scalability, authentication and indexing just to name a few.
Open Content is a handy toolbox: we use it in our own solutions, for example as the content backend for our Digital Writer and Newsroom apps, as well as powering the Naviga web presentation layer. We also use it in our XLibris archive solution. Our customers use it to power in-house built presentation solutions.
Together with the Naviga Creation and Presentation tools, Open Content delivers a standardised easy-to-maintain setup. You can also use Open Content as a content agnostic storage and search engine for digital content.
Any digital material can be stored in Open Content using the Open Content REST API. The Open Content configuration makes it possible to group content into content types (typically: Article, Image, Page, Concept, Planning, List, Package). Content from different systems can be normalised into the same content type.
A content object (item) consists of a primary file and a metadata file describing the primary file.
Normally, XML metadata files are used to describe the uploaded content, and properties are extracted from the metadata files using expressions.
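To show the idea of expression-based extraction, the sketch below pulls a value out of a minimal made-up XML file with an XPath expression via xmllint. The element names are invented and this is not the OC configuration format:

```shell
# Minimal made-up metadata file; real items use a NewsML structure.
cat > /tmp/item-metadata.xml <<'EOF'
<item>
  <headline>Weekend guide</headline>
</item>
EOF
# A property is conceptually a named value extracted with an expression:
HEADLINE=$(xmllint --xpath 'string(/item/headline)' /tmp/item-metadata.xml)
echo "$HEADLINE"
```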
Open Content is configured with a browser based UI or by YAML-files.
Typical use cases for Open Content are:
Long-time archive with the XLibris search client
Learn how to perform search towards Open Content /search/ endpoint
A small test tool for the Open Content API can be used to learn the Open Content search API more easily. The UI can be accessed at http://localhost:8800 when the Open Content docker-compose stack is started.
The configuration in these exercises uses nested properties and assumes that you have the content from lab 1-3 uploaded.
Important: To enable a scalable and predictable solution, some old features have been removed:
Property extraction based on relations between content types has been removed. In OC 3.0 you need to supply all metadata needed for property extraction within the content item itself.
Query-time evaluation of XPath expressions has been removed from the Search API. Previously, if the value of a configured property was not indexed, OC would fetch the document and evaluate the XPath for the missing properties before returning the search result, to make sure they were always included. In OC 3.0, only what is indexed is returned in a search result. If you change the properties config, a reindex of the content is needed.
All info about every Open Content version, release notes, upgrade info, admin guides etc.
Open Content is the content repository of the Creation and Presentation universe.
This book is intended for anyone managing or integrating with Open Content. If you're new to Open Content, we recommend starting with the . If you're a developer, feel free to jump straight to the .
We urge you to reach out to us at if you have any questions. Certain sections are still incomplete, and in other sections we have yet to define well documented best practices.
All your digitally produced and published content in one place, searchable in a single interface – the XLibris web application.
Searching in XLibris is fast, even in an archive with over 25 million objects. You can use the query syntax in a really simple manner, just like a Google search, but there is also a powerful query language available behind the scenes if you are interested in power searches.
The developer friendly availability platform A well documented and flexible platform that makes all content available, all the time.
The backend for your headless CMS A headless CMS without a content repository is like an electric car without batteries. Instead of building batteries build your chassis.
Built for Amazon AWS Run Open Content in AWS and we can handle upgrades and changes with zero downtime, with unlimited storage and backup possibilities.
Integrated to Naviga Content solutions Works out-of-the-box with solutions such as Newspilot, Digital Writer, Dashboard and Naviga web.
APIs for everything Use our user interfaces for admin and search, or use the OC REST APIs. Regardless of approach, it’s all open for integration.
Reliable backend Spend less time on server issues and let us manage the hosting. Open Content supports a range of different setups, from a small single-node setup to large, clustered, high availability setups.
Indexing is done using Solr, an open source enterprise search platform built on Apache Lucene™, making your content accessible for any purpose.
Different content types (for example articles, images, lists, graphics) are separated and have their own specific properties setup. Relations between content items can easily be created, minimising the number of requests needed to fetch the content.
We offer a standard OC setup for both content production as well as presentation, built on best practices. The standard setups are used with the Naviga Creation and Presentation Platforms.
Open Content is not a video or streaming platform. If you want to store and edit streamed content, we recommend a specialised platform serving that purpose, like Flowplayer, Youplay or YouTube. It may still be convenient to have access to such content within Open Content; in that case, simply add those objects and a subset of metadata to Open Content as well, with a link to the original source.
Everyone expects nothing less than information in near real time. Live Content Cloud is used when you want to push data to subscribers. Used in our App Platform for live updates of already downloaded content, or personalized push notifications based on OC Concepts.
Query Streamer is a cloud-based “subscription service” for tools and presentation clients. You set up a stream, a question like “sport content”, then subscribe to changes to that content stream and get notified in near real time. When a new item matches, Query Streamer notifies the subscriber(s).
QS uses Elasticsearch percolation in a cluster configuration as an Amazon service. Subscriptions are persisted in QS.
Infocaster is the part that distributes the output from the Query Streamer (or other sources) to end subscribers. Written in Node, it runs stateless at AWS, as scalable Docker instances behind a load balancer. A message is sent as a push notification (SNS) or as an event via an SQS queue.
Identifiers as a feature is removed.
The import metadata rules function is removed.
Default search response properties can’t be configured anymore. The client should always specify which properties it wants in the search response. If the client does not specify any properties, all will be returned.
Proven solution Used daily by thousands of Creation users, as well as powering hundreds of apps and sites all over the world.
Content Types Open Content configuration makes it possible to group content into content types (typically: Article, Image, Page, Concept, Job, Planning item, List, Package). We have a standardised configuration for all tools in the Creation suite.
Back-end for web and mobile publishing using Naviga web
Content repo for the Content Creation Suite
OC Concepts is an entire metadata universe – all stored and made available in Open Content
OC Concepts is a metadata structure, built around the IPTC NewsML-G2 standard. One of the most important parts is of course how to use it: for the editor, the developer, as well as the end user. All concepts are stored and made available in Open Content.
In our view, metadata like categories and tags are not just text strings. Instead, each piece of metadata is an object, each with a unique id, a name and its own set of metadata and links.
Take an author. It could be just a name, but when you think of it as an object with a unique id, first name, last name, email, phone, description, avatar image, high-res image and links, things get really powerful.
These can be shown in your frontend if you want to; for example, a search page showing articles for a specific category could also show the long description or image for that category.
Examples of Concepts:
Author
Category
Persons
Organisations
Topics
Places (POIs or geo areas)
Story
Functional tags
The concepts are administered using our Dashboard application, your journalists use Digital Writer to choose the right concepts, and Everyware and the App Platform will show and let the user follow selected topics or geo areas.
Update of the settings file
The exercises can be downloaded from
# In terminal do
cd ~/oc-lab
mkdir lab-newsitem
cd lab-newsitem
curl -s https://s3-eu-west-1.amazonaws.com/open-content-artifacts/lab-newsitem.zip --output lab-newsitem.zip
# this will download the lab-newsitem.zip, unzip it
unzip lab-newsitem.zip
Structure of the lab-newsitem dir:
lab-newsitem/
├── 0-config
│ ├── configure.sh
│ └── lab-newsml-config.yml
├── 1-concept
│ ├── 29889da3-e930-4846-a12b-096508e1054d
│ ├── 8c7437ce-a7ca-414d-8bfc-7bf2d1054fc3
│ ├── 9197a3ea-9624-404a-aef5-4d80eaadc99f
│ ├── b7399f0c-fb3d-4a4f-b849-9935a77d9512
│ ├── db09e859-43d4-42f8-a6ca-c810b653ec6a
│ ├── fb5911fa-b97f-436e-83f7-de7f7a203ea9
│ ├── upload-concepts.sh
│ └── uuids
├── 2-upload-image
│ ├── image-template.xml
│ ├── one.jpg
│ ├── one.jpg.uuid
│ ├── three.jpg
│ ├── three.jpg.uuid
│ ├── two.jpg
│ ├── two.jpg.uuid
│ └── upload-image.sh
├── 3-upload-article
│ ├── article.xml
│ └── upload-article.sh
├── 4-search
│ └── readme.md
├── 5-delete
│ ├── delete-mine.sh
│ └── delete.sh
├── 6-event-sourcing
│ └── listen.sh
├── build.sh
├── lab-newsitem.zip
├── readme.md
└── settings
The settings file holds information about the host, user and password for the Open Content instance to be used with the scripts in the exercise directories. Update the settings file to point at the Open Content you will use.
Upload and activate the configuration for an Open Content using Newsitem
This lab configures Open Content using the editorial standard config.
The script ./configure.sh in ~/oc-lab/opencontent-configs will configure Open Content:
cd ~/oc-lab/opencontent-configs/scripts
./configure.sh \
http://admin:admin@localhost:8080/opencontent \
editorial
Verify the configuration in the Open Content admin UI (http://localhost/admin)
Activate the config either using the admin UI or curl below:
curl -u admin:admin \
-X POST "http://localhost:8080/opencontent/admin/configuration/activate" \
-H "accept: */*" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "reason=configured from script&name=$(whoami)"
Examine the + Configuration menu and more in the Open Content admin UI:
History
Compare (remove something)
Import/Export
Undo configuration
Replication is an Open Content Service responsible for copying items between different Open Content instances
Open Content Replicator is a module that allows Open Content to replicate content to an Open Content Satellite. The OC Satellite works as a read-only instance and can store anything from all of the information in the Master down to a subset of it.
The Replicator can be run automatically in near real time and/or be triggered manually in batches.
For example, one satellite can consist of content with a specific set of metadata (products, categories etc.) while another satellite can have different content.
Open Content Replicator is in this environment:
http://localhost:8180/replication
The replicator can be used to replicate objects from one Open Content to another.
Different types of replication exist:
Full replication; replicates objects using a query
Incremental replication; replicates objects on incremental re-indexing events. Uses RabbitMQ; the indexer needs to be configured for this.
used for replication between editorial and public Open Content
Batch replication; replicates objects on batch re-indexing events. Uses RabbitMQ; the indexer needs to be configured for this.
almost never used
Partial replication; updates the target Open Content on filter changes
not used
Event-log replication; polls the event log or content log
export OpenContentIp=127.0.0.1
export pemfile=
OC_USER=
OC_PWD=
curl -s -u admin:admin "http://localhost:8080/opencontent/search?" | jq .
curl -s -u admin:admin "http://localhost:8080/opencontent/search?\
properties=uuid" | jq .
curl -s -u admin:admin "http://localhost:8080/opencontent/search?\
q=contenttype:Article&\
properties=uuid"
curl --globoff -s -u admin:admin "http://localhost:8080/opencontent/search?\
q=contenttype:Article&\
properties=uuid,ConceptRelations[ConceptName]" | jq .
curl -s --globoff -u admin:admin "http://localhost:8080/opencontent/search?\
q=contenttype:Article&\
properties=uuid,ConceptRelations[ConceptName]&\
filters=ConceptRelations(q=ConceptName:Weekend)" | jq .
Everything is connected – when you find a page, you will instantly see all articles and images published on that page. When you find a job, you will find other content that belongs to the same job.
Build your own User Interface and workflows We have customers who have created their own workflows for importing, searching and using images in Open Content.
Archive historical content Scanned newspapers in PDF format with a predefined naming standard can also be imported and made searchable in XLibris.
How to run Open Content in Docker on my own computer.
The Docker images for Open Content are primarily for development purposes, not production. So if you are a developer looking for how to start Open Content locally for integration testing or trying things out, then this is for you.
# create a directory where to work, in home oc-lab
cd
mkdir oc-lab
cd oc-lab
Wait until all containers are downloaded and started. Now there is an empty Open Content without configuration or content.
Configuration is done using the admin UI or the admin API. The UI can be found here .
Below is the menu for the Open Content admin UI.
The first thing that has to be configured is storage. This can either be done in the UI at or with this curl command:
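The storage registration used later in this lab looks like this (it creates a storage named OpenContent at /tmp). A running docker-compose stack from this lab is required, so the request is shown commented out here:

```shell
STORAGE_URL="http://localhost:8080/opencontent/admin/storage"
# Registers a storage named OpenContent with path /tmp:
# curl -u admin:admin -d name=OpenContent -d path=/tmp "$STORAGE_URL"
echo "$STORAGE_URL"
```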
Open Content configuration in this setup is done using a local copy of our Bitbucket repository for configuration. Use the Open Content admin UI to inspect the detailed settings for the different configuration options.
Go to the opencontent-configuration directory where the configure.sh script is
Configure Open Content for public use
Configure Open Content for public and app use
Configure Open Content for editorial use
Activation of the configuration
The Naviga content solutions can act as a standard end-to-end solution. You use our standard authoring setup in combination with our solutions for presentation on the web and in mobile apps. In that case, we manage everything from setup and configuration to hosting and support. You are still able to interact with the backend, but we recommend using our higher-level APIs for content creation (like ingestion of content) instead of the lower-level OC API.
You can also use the Naviga content solutions as a headless CMS and build your own presentation layer. In that case, we recommend using our content distribution API to power your presentation solution. You may also use the lower-level OC REST API to power your presentation layer. The distribution API also offers a cache solution; if you use the OC REST API, you need to add your own cache mechanism between OC and your presentation engine. It's possible to just scale up the read capacity of Open Content, but in most cases that is quite an expensive solution.
Both solutions use a separate Open Content for production and one to power the presentation layers. When a content item, like an article, is ready to be published (usable), it's copied to the public content repo by the Replicator service.
The architecture describes the upcoming 3.0 version of Open Content.
The Open Content stack consists of several parts, all running in the Amazon cloud.
Load Balancer. The OC stack uses the standard Amazon application load balancers.
OC API. The REST API for queries, read and write, as well as the OC Admin API. Runs in ECS and scales horizontally.
S3 is the storage where all content items are stored.
We always recommend a multi-AZ setup for all parts of the stack. That means the Open Content stack is running on multiple datacenters in parallel, enabling high availability.
For Open Content pre 3.0, you'll need to use the master-satellite mechanism (see below) to reach multi-AZ redundancy.
When using Open Content as a creation backend, we always use a Satellite Open Content for the presentation layers. Production and presentation are totally separated, and each of them can be configured and scaled in the appropriate way.
We recommend using the Naviga standard configurations for Creation and Presentation. They are both versioned and maintained by Naviga, and are updated when needed to stay in sync with the Naviga Creation and Presentation tools.
Master - satellite In complex environments, setting up multiple Open Content Satellites might be a suitable way to scale. All content is stored in an Open Content Master setup, and predefined replication rules make sure the correct content is available in each Satellite. This does not require additional storage: the Satellites are set up as read-only OCs, reading the content from the same S3 bucket, saving both time and money. As content can differ, each Satellite maintains its own index.
Upload of a content newsitem to Open Content
This section will show how an upload to Open Content is performed
To be able to do the exercises you may need to prepare your system. You need an Open Content server to perform upload requests against.
You need a bash terminal for execution of the scripts
The prepare Windows section explains how to enable a bash terminal for Windows
You need the following installed:
aws cli
imagemagick
unzip
The examples show how to upload all object types referenced from an article:
Concepts
Images
Article
The article has relations to 6 different concepts and 3 images. Certain conventions must be known before uploading these to an Open Content.
Search client (XLibris)
Admin client
REST API Swagger documentation
Open Content 3.0 is a major new version. It's not yet released, but is planned for release mid 2020.
The 3.0 version of Open Content is a major upcoming release. A lot of effort, on all levels, has been put into increased performance, scalability and availability. Many pieces have been optimised, rewritten or redesigned. The APIs are still the same, except for a few functions that have been deprecated.
SolrCloud support Running one single instance of Solr means that you have one single index running on one Solr node. Even if we have quick restore processes, that’s not a redundant solution. With the 3.0 version, we have standardised a multi-node SolrCloud setup as an option to the standard setup.
The SolrCloud setup runs in a Kubernetes cluster, starting with 3 Solr nodes plus the necessary orchestration mechanisms. The Solr version used in the 3.0 version is 8.x.x.
# Download the zip file from S3
curl -s https://s3-eu-west-1.amazonaws.com/open-content-artifacts/opencontent-docker-configs.zip \
--output opencontent-docker-configs.zip
# Unzip
unzip opencontent-docker-configs.zip
# Go to directory
cd opencontent-docker-lab
docker-compose -f docker-compose-lab.yml up --detach
jq
SolrCloud is the Solr cluster that executes the queries, manages the indexes etc. It's deployed in an EKS cluster, from 1 Solr node and up. We always recommend at least 2 Solr nodes for redundancy.
Binlog is created by the RDS, and contains all modifications to the OC content.
Kafka is a streaming platform where we persist all changes to the content item. It also powers the Indexer services. We use the Amazon managed Kafka service.
The Indexer is the part that extracts the metadata to index and performs the index updates in Solr. The updates are then committed to the index by Solr. The indexer runs in ECS containers and scales horizontally.
The Notifier is used to create event-driven workflows.
We have also offloaded a lot of work from the OC API fronts, like moving the property extraction to the indexer process. The OC API does not share the database with the indexer anymore. This increases the OC API performance in general and also provides a more predictable performance.
Apache Kafka The Kafka streaming platform (https://kafka.apache.org/) is now a part of the Open Content solution. In addition to the classic Open Content event log, all commits (add, update, delete) are inserted into the Kafka log. Kafka is used internally to power the new indexer processes as well as the upcoming Audit Trail module for the Naviga Writer and Dashboard. The complete content item is stored in Kafka (excluding binary artefacts).
Increased upload performance Bottlenecks in the upload process have been identified, fixed and optimised to get the highest possible upload throughput. Upload of content now scales more or less linearly with the number of OC API fronts used.
Increased read performance We have made a set of query and read optimisations and eliminated a couple of bottlenecks. The performance when querying for nested properties is substantially increased: resolving nested properties is now parallelised to maximise the utilisation of the hardware, and the number of Solr requests needed for resolving nested properties is substantially decreased. Using the new SolrCloud multi-node setup is also a good way to scale querying performance by adding more Solr nodes. Both the OC API and the SolrCloud cluster now scale almost linearly in read-intensive setups.
Increased index update performance Using Solr sharding, we are able to split indexes into smaller pieces and thereby increase the commit capacity. The indexing process itself has also been redesigned to be more streamlined and efficient. We are now also able to run multiple indexers in parallel to boost indexing performance.
AWS deployment Open Content 3.0 must be deployed in the AWS cloud. The OC 3.0 setup uses AWS services and deployment templates designed for AWS. Note: on-premise installations are not supported (on-premise installation is possible with Open Content up to 2.2.3).
Metrics Prometheus (https://prometheus.io/) is supported in the new 3.0 setup. The OC API, SolrCloud cluster, Kafka and the indexer processes all expose metrics that can be graphed and acted on.
High Availability
Increase the performance in identified bottlenecks for upload, search, and indexing
No vital single point of failure
Horizontal scaling to gracefully handle large amounts of objects
Show how to use the Open Content event log
In this exercise you will start a script which polls the Open Content event log every 5 seconds. If any events are found, the script prints information about each event.
The script persists the id of the last processed event to a file (lastevent). The next time the script starts and the lastevent file exists, it resumes processing events with ids larger than the last one processed. This means that even if the listener is off, it will continue from the last event the next time it is started; this way, no event is missed.
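The persistence part of that logic can be sketched with two small helpers (listen.sh is the authoritative version; this simplified stand-in uses a file under /tmp):

```shell
LASTEVENT_FILE="/tmp/lastevent"

# Read the id of the last processed event, defaulting to 0 on first run.
load_last() {
  if [ -f "$LASTEVENT_FILE" ]; then cat "$LASTEVENT_FILE"; else echo 0; fi
}

# Persist the id of the last processed event.
save_last() {
  echo "$1" > "$LASTEVENT_FILE"
}

rm -f "$LASTEVENT_FILE"
first=$(load_last)    # no file yet: start from the beginning
save_last 20
resumed=$(load_last)  # next poll asks for events after id 20
echo "$first $resumed"
# Every 5 seconds, the poll itself would be roughly:
# curl -s -u admin:admin "localhost:8080/opencontent/eventlog?event=$(load_last)"
```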
This is how to get the /eventlog/ endpoint response:
Response
To get the id of the last event, use event=-1.
docker-compose -f docker-compose-lab.yml logs -f wildfly
curl -u admin:admin -d name=OpenContent -d path=/tmp http://localhost:8080/opencontent/admin/storage
cd ../opencontent-configs/scripts
./configure.sh \
http://admin:admin@localhost:8080/opencontent \
public
./configure.sh \
http://admin:admin@localhost:8080/opencontent \
public-app
./configure.sh \
http://admin:admin@localhost:8080/opencontent \
editorial
curl -u admin:admin \
-X POST "http://localhost:8080/opencontent/admin/configuration/activate" \
-H "accept: */*" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "reason=configured from script&name=$(whoami)"
6-event-sourcing is an example of a bash script which polls the event log every 5 seconds and prints information about what is happening. To try this:
Now the script polls the event log every 5 seconds and prints out the events.
Keep the listener running in a terminal window, then add/modify/delete items in Open Content and watch the events.
curl -s -u admin:admin "localhost:8080/opencontent/eventlog?event=0" | jq .
{
"events": [
{
"id": 1,
"uuid": "8c7437ce-a7ca-414d-8bfc-7bf2d1054fc3",
"eventType": "ADD",
"created": "2019-06-14T09:32:23.000Z",
"content": {
"uuid": "8c7437ce-a7ca-414d-8bfc-7bf2d1054fc3",
"version": 1,
"created": "2019-06-14T09:32:22.000Z",
"source": "lab-hans.bringert",
"contentType": "Concept",
"batch": false
}
},
{
"id": 2,
"uuid": "db09e859-43d4-42f8-a6ca-c810b653ec6a",
"eventType": "ADD",
"created": "2019-06-14T09:32:23.000Z",
"content": {
"uuid": "db09e859-43d4-42f8-a6ca-c810b653ec6a",
"version": 1,
"created": "2019-06-14T09:32:23.000Z",
"source": "lab-hans.bringert",
"contentType": "Concept",
"batch": false
}
}
]
}
curl -s -u admin:admin "localhost:8080/opencontent/eventlog?event=-1" | jq .
{
"events": [
{
"id": 20,
"uuid": "0a18480e-1486-4ce5-8f61-ebb67d3d8938",
"eventType": "DELETE",
"created": "2019-06-14T09:35:01.000Z",
"content": {
"uuid": "0a18480e-1486-4ce5-8f61-ebb67d3d8938",
"version": 1,
"created": "2019-06-14T09:32:37.000Z",
"source": "lab-hans.bringert",
"contentType": "Article",
"batch": false
}
}
]
}
./listen.sh
Url [http://127.0.0.1:8080] :
Username [admin]:
Password [admin]:
0
'lastevent' file is missing, last event in Open Content is: 20
An overview of the eventlog and contentlog endpoints
The event log tells you what has happened after a last known event. Depending on your use case you can either process the eventlog from the beginning (it keeps a history of one month), or start at the last event. Processing all retained events is useful if you want to prepopulate a cache, but if you just need it to invalidate a cache that starts cold and is built ad hoc, it makes more sense to start with the last event.
A request to the eventlog looks like this: GET https://oc.tryout.infomaker.io:8443/opencontent/eventlog
If called without any query parameters you get events from the start of the log:
If you pass in a negative value, like GET https://oc.tryout.infomaker.io:8443/opencontent/eventlog?event=-2, you get the last N events in the log.
The id attribute in the events can be used to paginate through the eventlog. So if we have processed events up to 406374, we would ask the eventlog for all events after it, like so: GET https://oc.tryout.infomaker.io:8443/opencontent/eventlog?event=406374:
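Putting it together, a consumer pages through the log by always passing the highest id it has seen. The sketch below simulates one response body instead of calling the server (jq is assumed to be available, as in the rest of these docs):

```shell
last=406374
# response=$(curl -s -u user:pass "https://oc.tryout.infomaker.io:8443/opencontent/eventlog?event=$last")
# Simulated response body for illustration:
response='{"events":[{"id":406375},{"id":406376}]}'
# Advance the cursor to the highest id in the batch:
last=$(echo "$response" | jq '[.events[].id] | max')
echo "$last"
```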
To fetch the updated object the normal objects endpoint is used GET