Elastic Oracle connector reference

Elastic managed connector reference

Availability and prerequisites

This connector is available natively in Elastic Cloud as of 8.12.0. To use this connector, satisfy all managed connector requirements.

Create an Oracle connector

Use the UI

To create a new Oracle connector:

  1. In the Kibana UI, navigate to the Search → Content → Connectors page from the main menu, or use the global search field.
  2. Follow the instructions to create a new native Oracle connector.

For additional operations, see Connectors UI in Kibana.

Use the API

You can use the Elasticsearch Create connector API to create a new native Oracle connector.

For example:

resp = client.connector.put(
    connector_id="my-oracle-connector",
    index_name="my-elasticsearch-index",
    name="Content synced from Oracle",
    service_type="oracle",
    is_native=True,
)
print(resp)
PUT _connector/my-oracle-connector
{
  "index_name": "my-elasticsearch-index",
  "name": "Content synced from Oracle",
  "service_type": "oracle",
  "is_native": true
}
You’ll also need to create an API key for the connector to use.

The user needs the cluster privileges manage_api_key, manage_connector and write_connector_secrets to generate API keys programmatically.
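
If your user does not have these privileges, a user with sufficient rights can grant them through a role assigned to that user. The following is a minimal sketch using the Python client; the role name connector-admin-role is a placeholder, not something the connector requires:

resp = client.security.put_role(
    name="connector-admin-role",
    cluster=[
        "manage_api_key",
        "manage_connector",
        "write_connector_secrets"
    ],
)
print(resp)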

To create an API key for the connector:

  1. Run the following command, replacing values where indicated. Note the id and encoded return values from the response:

    resp = client.security.create_api_key(
        name="my-connector-api-key",
        role_descriptors={
            "my-connector-connector-role": {
                "cluster": [
                    "monitor",
                    "manage_connector"
                ],
                "indices": [
                    {
                        "names": [
                            "my-index_name",
                            ".search-acl-filter-my-index_name",
                            ".elastic-connectors*"
                        ],
                        "privileges": [
                            "all"
                        ],
                        "allow_restricted_indices": False
                    }
                ]
            }
        },
    )
    print(resp)
    const response = await client.security.createApiKey({
      name: "my-connector-api-key",
      role_descriptors: {
        "my-connector-connector-role": {
          cluster: ["monitor", "manage_connector"],
          indices: [
            {
              names: [
                "my-index_name",
                ".search-acl-filter-my-index_name",
                ".elastic-connectors*",
              ],
              privileges: ["all"],
              allow_restricted_indices: false,
            },
          ],
        },
      },
    });
    console.log(response);
    POST /_security/api_key
    {
      "name": "my-connector-api-key",
      "role_descriptors": {
        "my-connector-connector-role": {
          "cluster": [
            "monitor",
            "manage_connector"
          ],
          "indices": [
            {
              "names": [
                "my-index_name",
                ".search-acl-filter-my-index_name",
                ".elastic-connectors*"
              ],
              "privileges": [
                "all"
              ],
              "allow_restricted_indices": false
            }
          ]
        }
      }
    }
  2. Use the encoded value to store a connector secret, and note the id return value from this response:

    resp = client.connector.secret_post(
        body={
            "value": "encoded_api_key"
        },
    )
    print(resp)
    const response = await client.connector.secretPost({
      body: {
        value: "encoded_api_key",
      },
    });
    console.log(response);
    POST _connector/_secret
    {
      "value": "encoded_api_key"
    }
  3. Use the API key id and the connector secret id to update the connector:

    resp = client.connector.update_api_key_id(
        connector_id="my_connector_id>",
        api_key_id="API key_id",
        api_key_secret_id="secret_id",
    )
    print(resp)
    const response = await client.connector.updateApiKeyId({
      connector_id: "my_connector_id>",
      api_key_id: "API key_id",
      api_key_secret_id: "secret_id",
    });
    console.log(response);
    PUT /_connector/my_connector_id/_api_key_id
    {
      "api_key_id": "API key_id",
      "api_key_secret_id": "secret_id"
    }

Refer to the Elasticsearch API documentation for details of all available Connector APIs.

Usage

To use this connector as a managed connector, see Elastic managed connectors.

The database user requires CONNECT and DBA privileges and must be the owner of the tables to be indexed.
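
Before configuring the connector, you may want to confirm that this database user can connect and owns the tables you plan to index. The following is a minimal sketch using the python-oracledb driver, independent of the connector itself; the connection details and credentials are placeholders for your own values:

import oracledb

# Placeholder connection details -- replace with your own environment.
conn = oracledb.connect(
    user="my_user",
    password="my_password",
    dsn="127.0.0.1:1521/my_service_name",  # host:port/service_name
)

with conn.cursor() as cursor:
    # Tables owned by the connecting user; only these can be indexed.
    cursor.execute("SELECT table_name FROM user_tables ORDER BY table_name")
    for (table_name,) in cursor:
        print(table_name)

conn.close()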

Secure connection

To set up a secure connection, the Oracle service must be installed on the system where the connector is running.

Follow these steps:

  1. Set the oracle_home parameter to your Oracle home directory. If configuration files are not at the default location, set the wallet_configuration_path parameter.
  2. Create a directory to store the wallet.

    $ mkdir $ORACLE_HOME/ssl_wallet
  3. Create a file named sqlnet.ora at $ORACLE_HOME/network/admin and add the following content:

    WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = $ORACLE_HOME/ssl_wallet)))
    SSL_CLIENT_AUTHENTICATION = FALSE
    SSL_VERSION = 1.0
    SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA)
    SSL_SERVER_DN_MATCH = ON
  4. Run the following commands to create a wallet and attach an SSL certificate. Replace the certificate file name (root_ca.pem) with your own.

    $ orapki wallet create -wallet path-to-oracle-home/ssl_wallet -auto_login_only
    $ orapki wallet add -wallet path-to-oracle-home/ssl_wallet -trusted_cert -cert path-to-oracle-home/ssl_wallet/root_ca.pem -auto_login_only

For more information, refer to the Amazon RDS documentation about Oracle SSL and the Oracle documentation: https://docs.oracle.com/database/121/DBSEG/asossl.htm#DBSEG070.

For additional operations, see Connectors UI in Kibana.

Compatibility

This connector is compatible with Oracle Database versions 18c, 19c and 21c.

Configuration

Use the following configuration fields to set up the connector:

Connection source
Dropdown to determine the Oracle source connection type: Service Name or SID. Default value is SID. Select the Service Name option if connecting to a pluggable database.
SID
SID of the database.
Service name
Service name for the database.
Host
The IP address or hostname of the Oracle database server. Default value is 127.0.0.1.
Port
Port number of the Oracle database server.
Username
Username to use to connect to the Oracle database server.
Password
Password to use to connect to the Oracle database server.
Comma-separated list of tables

Comma-separated list of tables to monitor for changes. Default value is *. Examples:

  • TABLE_1, TABLE_2
  • *
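
You can also set these values programmatically with the Update connector configuration API. The following is a minimal sketch using the Python client, assuming the connector ID my-oracle-connector and the underlying field keys listed in the self-managed configuration section below; all values shown are placeholders:

resp = client.connector.update_configuration(
    connector_id="my-oracle-connector",
    values={
        "connection_source": "SID",
        "sid": "my_sid",
        "host": "127.0.0.1",
        "port": 1521,
        "username": "my_user",
        "password": "my_password",
        "tables": "*",
    },
)
print(resp)
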
Documents and syncs
  • Tables with no primary key defined are skipped.
  • If the table’s system change number (SCN) value is not between the min(SCN) and max(SCN) values of the SMON_SCN_TIME table, the connector cannot retrieve the most recently updated time, so the table is re-indexed in every sync (see the diagnostic sketch after this list). For more details, refer to the following discussion thread.
  • The sys user is not supported, as it contains 1000+ system tables. If you need to work with the sys user, use either sysdba or sysoper and configure this as the username.
  • Files bigger than 10 MB won’t be extracted.
  • Permissions are not synced. All documents indexed to an Elastic deployment will be visible to all users with access to that deployment.
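
To check whether a table is affected by the SMON_SCN_TIME limitation described above, you can compare its row SCNs against that table's range. The following is a minimal diagnostic sketch using the python-oracledb driver; the connection details and the table name MY_TABLE are placeholders:

import oracledb

conn = oracledb.connect(
    user="my_user",
    password="my_password",
    dsn="127.0.0.1:1521/my_service_name",
)

with conn.cursor() as cursor:
    # SCN range for which Oracle can map an SCN to a timestamp.
    cursor.execute("SELECT MIN(scn), MAX(scn) FROM sys.smon_scn_time")
    min_scn, max_scn = cursor.fetchone()

    # Highest row SCN in the table you plan to index (placeholder name).
    cursor.execute("SELECT MAX(ORA_ROWSCN) FROM MY_TABLE")
    (table_scn,) = cursor.fetchone()

    if not (min_scn <= table_scn <= max_scn):
        print("SCN outside SMON_SCN_TIME range: the connector cannot determine")
        print("the last update time and will re-index this table on every sync.")

conn.close()
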
Sync rules

Basic sync rules are identical for all connectors and are available by default.

Advanced sync rules are not available for this connector in the present version. Currently, filtering is controlled by ingest pipelines.

Known issues

There are no known issues for this connector.

See Known issues for any issues affecting all connectors.

Troubleshooting

See Troubleshooting.

Security

See Security.

Framework and source

This connector is built with the Elastic connector framework.

This connector uses the generic database connector source code (branch 8.16, compatible with Elastic 8.16).

View additional code specific to this data source (branch 8.16, compatible with Elastic 8.16).

Self-managed connector reference

Availability and prerequisites

This connector is available as a self-managed connector. It is compatible with Elastic versions 8.6.0+. To use this connector, satisfy all self-managed connector requirements.

Create an Oracle connector

Use the UI

To create a new Oracle connector:

  1. In the Kibana UI, navigate to the Search → Content → Connectors page from the main menu, or use the global search field.
  2. Follow the instructions to create a new Oracle self-managed connector.

Use the API

You can use the Elasticsearch Create connector API to create a new self-managed Oracle connector.

For example:

resp = client.connector.put(
    connector_id="my-oracle-connector",
    index_name="my-elasticsearch-index",
    name="Content synced from Oracle",
    service_type="oracle",
)
print(resp)
PUT _connector/my-oracle-connector
{
  "index_name": "my-elasticsearch-index",
  "name": "Content synced from Oracle",
  "service_type": "oracle"
}
You’ll also need to create an API key for the connector to use.

The user needs the cluster privileges manage_api_key, manage_connector and write_connector_secrets to generate API keys programmatically.

To create an API key for the connector:

  1. Run the following command, replacing values where indicated. Note the encoded return value from the response:

    resp = client.security.create_api_key(
        name="connector_name-connector-api-key",
        role_descriptors={
            "connector_name-connector-role": {
                "cluster": [
                    "monitor",
                    "manage_connector"
                ],
                "indices": [
                    {
                        "names": [
                            "index_name",
                            ".search-acl-filter-index_name",
                            ".elastic-connectors*"
                        ],
                        "privileges": [
                            "all"
                        ],
                        "allow_restricted_indices": False
                    }
                ]
            }
        },
    )
    print(resp)
    const response = await client.security.createApiKey({
      name: "connector_name-connector-api-key",
      role_descriptors: {
        "connector_name-connector-role": {
          cluster: ["monitor", "manage_connector"],
          indices: [
            {
              names: [
                "index_name",
                ".search-acl-filter-index_name",
                ".elastic-connectors*",
              ],
              privileges: ["all"],
              allow_restricted_indices: false,
            },
          ],
        },
      },
    });
    console.log(response);
    POST /_security/api_key
    {
      "name": "connector_name-connector-api-key",
      "role_descriptors": {
        "connector_name-connector-role": {
          "cluster": [
            "monitor",
            "manage_connector"
          ],
          "indices": [
            {
              "names": [
                "index_name",
                ".search-acl-filter-index_name",
                ".elastic-connectors*"
              ],
              "privileges": [
                "all"
              ],
              "allow_restricted_indices": false
            }
          ]
        }
      }
    }
  2. Update your config.yml file with the API key encoded value.
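
For example, assuming the flattened key layout used in the sample configuration file (shown in the Docker deployment section below), the encoded value from step 1 is set as elasticsearch.api_key:

elasticsearch.host: <ELASTICSEARCH_ENDPOINT>
elasticsearch.api_key: <ENCODED_API_KEY_FROM_STEP_1>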

Refer to the Elasticsearch API documentation for details of all available Connector APIs.

Usage

To use this connector as a self-managed connector, see Self-managed connectors.

The database user requires CONNECT and DBA privileges and must be the owner of the tables to be indexed.

Secure connection

To set up a secure connection, the Oracle service must be installed on the system where the connector is running.

Follow these steps:

  1. Set the oracle_home parameter to your Oracle home directory. If configuration files are not at the default location, set the wallet_configuration_path parameter.
  2. Create a directory to store the wallet.

    $ mkdir $ORACLE_HOME/ssl_wallet
  3. Create a file named sqlnet.ora at $ORACLE_HOME/network/admin and add the following content:

    WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = $ORACLE_HOME/ssl_wallet)))
    SSL_CLIENT_AUTHENTICATION = FALSE
    SSL_VERSION = 1.0
    SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA)
    SSL_SERVER_DN_MATCH = ON
  4. Run the following commands to create a wallet and attach an SSL certificate. Replace the certificate file name (root_ca.pem) with your own.

    $ orapki wallet create -wallet path-to-oracle-home/ssl_wallet -auto_login_only
    $ orapki wallet add -wallet path-to-oracle-home/ssl_wallet -trusted_cert -cert path-to-oracle-home/ssl_wallet/root_ca.pem -auto_login_only
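
After completing the wallet setup above, you can verify the TCPS connection independently of the connector. The following is a minimal sketch using the python-oracledb driver in thick mode; the client library path, host, port, service name, and credentials are placeholders for your own values:

import oracledb

# Thick mode is required for wallet-based TLS; point lib_dir at your Oracle
# client libraries (placeholder path).
oracledb.init_oracle_client(lib_dir="/path/to/oracle_home/lib")

# TCPS connect descriptor; 2484 is a commonly used TCPS port -- replace as needed.
dsn = (
    "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=127.0.0.1)(PORT=2484))"
    "(CONNECT_DATA=(SERVICE_NAME=my_service_name)))"
)

conn = oracledb.connect(user="my_user", password="my_password", dsn=dsn)
print(conn.version)  # Prints the database version if the TLS handshake succeeded.
conn.close()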

For more information, refer to the Amazon RDS documentation about Oracle SSL and the Oracle documentation: https://docs.oracle.com/database/121/DBSEG/asossl.htm#DBSEG070.

For additional operations, see Connectors UI in Kibana.

Compatibility

Oracle Database versions 18c, 19c, and 21c are compatible with the Elastic connector framework.

Configuration

When using the self-managed connector workflow, these fields will use the default configuration set in the connector source code. Note that this data source uses the generic_database.py connector source code. Refer to oracle.py for additional code, specific to this data source.

These configurable fields will be rendered with their respective labels in the Kibana UI. Once connected, users will be able to update these values in Kibana.

Use the following configuration fields to set up the connector:

connection_source
Determines the Oracle source: Service Name or SID. Default value is SID. Select Service Name if connecting to a pluggable database.
sid
SID of the database.
service_name
Service name for the database.
host
The IP address or hostname of the Oracle database server. Default value is 127.0.0.1.
port
Port number of the Oracle database server.
username
Username to use to connect to the Oracle database server.
password
Password to use to connect to the Oracle database server.
tables

Comma-separated list of tables to monitor for changes. Default value is *. Examples:

  • TABLE_1, TABLE_2
  • *
oracle_protocol
Protocol which the connector uses to establish a connection. Default value is TCP. For secure connections, use TCPS.
oracle_home
Path to the Oracle home directory, required to run the connector in thick mode for secure connections. For unsecured connections, keep this field empty.
wallet_configuration_path
Path to SSL Wallet configuration files.
fetch_size
Number of rows to fetch per request. Default value is 50.
retry_count
Number of retry attempts after failed request to Oracle Database. Default value is 3.
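
The fetch_size and retry_count fields control how the connector batches rows and retries failed queries. The following is a simplified illustration of that pattern, not the connector's actual implementation; the cursor and query are placeholders supplied by the caller:

import time

FETCH_SIZE = 50   # rows fetched per request (default)
RETRY_COUNT = 3   # retry attempts after a failed request (default)

def fetch_rows(cursor, query):
    """Yield rows in batches, retrying the query a limited number of times."""
    for attempt in range(RETRY_COUNT):
        try:
            cursor.execute(query)
            while True:
                rows = cursor.fetchmany(FETCH_SIZE)
                if not rows:
                    return
                yield from rows
        except Exception:
            if attempt == RETRY_COUNT - 1:
                raise
            time.sleep(2 ** attempt)  # simple backoff between retries
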
Deployment using Docker

You can deploy the Oracle connector as a self-managed connector using Docker. Follow these instructions.

Step 1: Download sample configuration file

Download the sample configuration file. You can either download it manually or run the following command:

curl https://raw.githubusercontent.com/elastic/connectors/main/config.yml.example --output ~/connectors-config/config.yml

Remember to update the --output argument value if your directory name is different, or if you want to use a different config file name.

Step 2: Update the configuration file for your self-managed connector

Update the configuration file with the following settings to match your environment:

  • elasticsearch.host
  • elasticsearch.api_key
  • connectors

If you’re running the connector service against a Dockerized version of Elasticsearch and Kibana, your config file will look like this:

# When connecting to your cloud deployment you should edit the host value
elasticsearch.host: http://host.docker.internal:9200
elasticsearch.api_key: <ELASTICSEARCH_API_KEY>

connectors:
  -
    connector_id: <CONNECTOR_ID_FROM_KIBANA>
    service_type: oracle
    api_key: <CONNECTOR_API_KEY_FROM_KIBANA> # Optional. If not provided, the connector will use the elasticsearch.api_key instead

Using the elasticsearch.api_key is the recommended authentication method. However, you can also use elasticsearch.username and elasticsearch.password to authenticate with your Elasticsearch instance.

Note: You can change other default configurations by simply uncommenting specific settings in the configuration file and modifying their values.

Step 3: Run the Docker image

Run the Docker image with the Connector Service using the following command:

docker run \
-v ~/connectors-config:/config \
--network "elastic" \
--tty \
--rm \
docker.elastic.co/integrations/elastic-connectors:8.16.0.0 \
/app/bin/elastic-ingest \
-c /config/config.yml

Refer to DOCKER.md in the elastic/connectors repo for more details.

Find all available Docker images in the official registry.

We also have a quickstart self-managed option using Docker Compose, so you can spin up all required services at once: Elasticsearch, Kibana, and the connectors service. Refer to this README in the elastic/connectors repo for more information.

Documents and syncs
  • Tables with no primary key defined are skipped.
  • If the table’s system change number (SCN) value is not between the min(SCN) and max(SCN) values of the SMON_SCN_TIME table, the connector cannot retrieve the most recently updated time, so the table is re-indexed in every sync. For more details, refer to the following discussion thread.
  • The sys user is not supported, as it contains 1000+ system tables. If you need to work with the sys user, use either sysdba or sysoper and configure this as the username.
  • Files bigger than 10 MB won’t be extracted.
  • Permissions are not synced. All documents indexed to an Elastic deployment will be visible to all users with access to that deployment.
Sync rules

Basic sync rules are identical for all connectors and are available by default.

Advanced sync rules are not available for this connector in the present version. Currently, filtering is controlled by ingest pipelines.

Self-managed connector operations
End-to-end testing

The connector framework enables operators to run functional tests against a real data source. Refer to Connector testing for more details.

To execute a functional test for the Oracle connector, run the following command:

make ftest NAME=oracle

By default, this will use a medium-sized dataset. To make the test faster, add the DATA_SIZE=small argument:

make ftest NAME=oracle DATA_SIZE=small
Known issues

There are no known issues for this connector.

See Known issues for any issues affecting all connectors.

Troubleshooting

See Troubleshooting.

Security

See Security.

Framework and source

This connector is built with the Elastic connector framework.

This connector uses the generic database connector source code (branch 8.16, compatible with Elastic 8.16).

View additional code specific to this data source (branch 8.16, compatible with Elastic 8.16).