Version: 2024.4

Installation

Required Bundles

This bundle depends on the Pimcore Datahub bundle, which needs to be installed first.

Installation Process

For Pimcore >= 10.5

To install the Datahub Simple Rest API bundle for Pimcore 10.5 or higher, follow the three steps below:

  1. Install the required dependencies:
composer require pimcore/data-hub-simple-rest
  2. Make sure the bundle is enabled in the config/bundles.php file. The following lines should be added:
use Pimcore\Bundle\DataHubSimpleRestBundle\PimcoreDataHubSimpleRestBundle;
// ...

return [
    // ...
    // make sure PimcoreDataHubBundle is added before this entry in the list
    // ...
    PimcoreDataHubSimpleRestBundle::class => ['all' => true],
    // ...
];
  3. Install the bundle:
bin/console pimcore:bundle:install PimcoreDataHubSimpleRestBundle

For Older Versions

To install the Datahub Simple Rest API bundle for older versions of Pimcore, please run the following commands instead:

composer require pimcore/data-hub-simple-rest
bin/console pimcore:bundle:enable PimcoreDataHubSimpleRestBundle
bin/console pimcore:bundle:install PimcoreDataHubSimpleRestBundle

Make sure the Datahub bundle's priority is higher than that of the Datahub Simple Rest API bundle. The priority can be specified as a parameter during bundle enablement or in the Pimcore extension manager.
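
As an illustration only, the priority could be passed while enabling the bundles; whether the priority option is available on the enable command depends on your Pimcore version, so alternatively adjust the values in the Pimcore extension manager:

# Give the Datahub bundle a higher priority than the Simple Rest API bundle
bin/console pimcore:bundle:enable PimcoreDataHubBundle --priority=20
bin/console pimcore:bundle:enable PimcoreDataHubSimpleRestBundle --priority=10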

Bundle Configuration

Configure Search Client

Set up the search client configuration in your Symfony configuration files (e.g. config.yaml). See OpenSearch Client Setup or Elasticsearch Client Setup for more information.
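
For orientation, a minimal Elasticsearch client configuration could look roughly like the sketch below; the configuration keys, client name and credentials are assumptions here, so treat the linked setup guides as authoritative:

# config/config.yaml -- minimal sketch, keys and values are assumptions
pimcore_elasticsearch_client:
    es_clients:
        default:
            hosts: ['elasticsearch:9200']
            username: 'elastic'
            password: 'changeme'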

Further Configuration Options

Configure the index name prefix and further options via Symfony configuration:

pimcore_data_hub_simple_rest:

    # Prefix for index names
    index_name_prefix: datahub_restindex_

    # Limit of page size and offset when paging only works via page cursor (and not page numbers anymore).
    max_results_window: 10000

    # Options to configure indexing behaviour
    indexing_options:
        assets:

            # Enable indexing for exif data
            enable_exif: true

            # Enable indexing for xmp data
            enable_xmp: true

            # Enable indexing for iptc data
            enable_iptc: true
        global_options:

            # Enable numeric detection for dynamic objects (like embedded asset meta data, etc.)
            numeric_detection: false

            # Enable date detection for dynamic objects (like embedded asset meta data, etc.)
            date_detection: true

    # Configure number of shards for created indices
    number_of_shards_config:

        # default number is picked if no index specific setting is set
        default_number: 1

        # Define number of shards for certain indices. Define index name (without -odd/-even postfix) as key, and number of shards as value.
        index_specific: []

    # Configure index queue processing via symfony messenger
    messenger_queue_processing:

        # Activate queue processing via symfony messenger.
        activated: false

        # Lifetime of tmp store entry for current worker count entry. After lifetime, the value will be cleared. Defaults to 1 hour.
        worker_count_lifetime: 3600

        # Count of items processed per worker message.
        worker_item_count: 400

        # Count of maximum parallel worker messages for queue processing.
        worker_count: 3
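
For example, to raise the shard count for one specific index while keeping the default for all others, the index_specific map can be filled as sketched below; the index name used here is purely hypothetical:

pimcore_data_hub_simple_rest:
    number_of_shards_config:
        default_number: 1
        index_specific:
            # hypothetical index name, given without the -odd/-even postfix
            datahub_restindex_my_endpoint: 3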

  • Supported OpenSearch version: 2.7
  • Supported Elasticsearch version: 8.0

Index Processing

To make sure the indexing queue is processed and the index gets filled, there are two possible ways:

Command Based

For command-based queue processing, the following command has to be executed on a regular basis, e.g. every 5 minutes.

*/5 * * * * php /home/project/www/bin/console datahub:simple-rest:process-queue 

Symfony Messenger Based

For Symfony Messenger based queue processing, at least the following configuration needs to be added to the Symfony configuration:

pimcore_data_hub_simple_rest:
    messenger_queue_processing:
        activated: true

If activated, the processing is kicked off automatically with the datahub_simplerest_update_queue_dispatching maintenance task.

In addition, the following settings are available; they all have meaningful default values (see the example after the list):

  • worker_count: Count of maximum parallel worker messages for queue processing
  • worker_item_count: Count of items processed per worker message.
  • worker_count_lifetime: Lifetime of tmp store entry for current worker count entry. After lifetime, the value will be cleared.
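
For example, a configuration that increases parallel processing might look like this (the values are purely illustrative):

pimcore_data_hub_simple_rest:
    messenger_queue_processing:
        activated: true
        worker_count: 5
        worker_item_count: 200
        worker_count_lifetime: 3600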

Messages are dispatched via the pimcore_index_queues transport, so make sure you have workers processing this transport when activating messenger-based queue processing.
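
For example, a worker for this transport can be started with the standard Symfony Messenger command (in production, keep it running via a process manager such as Supervisor):

bin/console messenger:consume pimcore_index_queues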