Introducing self-managed data sources for Amazon OpenSearch Ingestion

Enterprise customers increasingly adopt Amazon OpenSearch Ingestion (OSI) to bring data into Amazon OpenSearch Service for various use cases. These include petabyte-scale log analytics, real-time streaming, security analytics, and searching semi-structured key-value or document data. OSI makes it straightforward, with simple integrations, to ingest data from many AWS services, including Amazon DynamoDB, Amazon Simple Storage Service (Amazon S3), Amazon Managed Streaming for Apache Kafka (Amazon MSK), and Amazon DocumentDB (with MongoDB compatibility).

Today we're announcing support for ingesting data from self-managed OpenSearch/Elasticsearch and Apache Kafka clusters. These sources can be either on Amazon Elastic Compute Cloud (Amazon EC2) or in on-premises environments.

In this post, we outline the steps to get started with these sources.

Solution overview

OSI supports the AWS Cloud Development Kit (AWS CDK), AWS CloudFormation, the AWS Command Line Interface (AWS CLI), Terraform, AWS APIs, and the AWS Management Console to deploy pipelines. In this post, we use the console to demonstrate how to create a self-managed Kafka pipeline.
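
If you prefer to script the deployment instead of using the console, the following is a minimal sketch using the AWS SDK for Python (Boto3) and its osis client. The pipeline name, the OCU range, and the local pipeline.yaml file are illustrative assumptions, not values from this post.

import boto3

# Read the pipeline definition (the same YAML you would paste into the console).
with open("pipeline.yaml") as f:   # assumed local file name
    pipeline_body = f.read()

osis = boto3.client("osis", region_name="us-east-1")

# Validate the configuration before creating the pipeline.
validation = osis.validate_pipeline(PipelineConfigurationBody=pipeline_body)
if not validation["isValid"]:
    raise SystemExit(validation.get("errors"))

# Create the pipeline with a small OCU range; adjust capacity to your workload.
osis.create_pipeline(
    PipelineName="self-managed-kafka-pipeline",   # assumed pipeline name
    MinUnits=1,
    MaxUnits=4,
    PipelineConfigurationBody=pipeline_body,
)

The rest of this post walks through the equivalent console experience.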

Prerequisites

To make sure OSI can connect and read data successfully, the following conditions should be met:

  • Network connectivity to data sources – OSI is generally deployed in a public network, such as the internet, or in a virtual private cloud (VPC). OSI deployed in a customer VPC is able to access data sources in the same or a different VPC and on the internet with an attached internet gateway. If your data sources are in another VPC, common methods for network connectivity include direct VPC peering, using a transit gateway, or using customer managed VPC endpoints powered by AWS PrivateLink. If your data sources are in your corporate data center or another on-premises environment, common methods for network connectivity include AWS Direct Connect and using a network hub like a transit gateway. The following diagram shows a sample configuration of OSI running in a VPC and using Amazon OpenSearch Service as a sink. OSI runs in a service VPC and creates an elastic network interface (ENI) in the customer VPC. For self-managed data sources, these ENIs are used for reading data from the on-premises environment. OSI creates a VPC endpoint in the service VPC to send data to the sink.
  • Name resolution for data sources – OSI uses an Amazon Route 53 resolver. This resolver automatically answers queries to names local to a VPC, public domains on the internet, and records hosted in private hosted zones. If you're using a private hosted zone, make sure you have a DHCP option set enabled and attached to the VPC, using AmazonProvidedDNS as the domain name server. For more information, see Work with DHCP option sets. Additionally, you can use resolver inbound and outbound endpoints if you need a more complex resolution scheme with conditions beyond a simple private hosted zone.
  • Certificate verification for data source names – OSI supports only SASL_SSL transport for the Apache Kafka source. Within SASL, Amazon OpenSearch Service supports most authentication mechanisms like PLAIN, SCRAM, IAM, GSSAPI, and others. When using SASL_SSL, make sure you have access to the certificates needed for OSI to authenticate. For self-managed OpenSearch data sources, make sure verifiable certificates are installed on the clusters. Amazon OpenSearch Service doesn't support insecure communication between OSI and OpenSearch. Certificate verification can't be turned off; in particular, the "insecure" configuration option is not supported.
  • Access to AWS Secrets Manager – OSI uses AWS Secrets Manager to retrieve credentials and certificates needed to communicate with self-managed data sources. For more information, see Create and manage secrets with AWS Secrets Manager. (A minimal secret-creation sketch follows this list.)
  • IAM role for pipelines – You need an AWS Identity and Access Management (IAM) pipeline role to write to data sinks. For more information, see Identity and Access Management for Amazon OpenSearch Ingestion. (A sample role-creation sketch also follows this list.)
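
For example, the Kafka pipeline configuration later in this post reads a username and password from a secret named secrets and a PEM certificate from a secret named kafka-cert. The following minimal Boto3 sketch creates those secrets; the secret names, credential values, and certificate file path are assumptions to replace with your own.

import json
import boto3

secretsmanager = boto3.client("secretsmanager", region_name="us-east-1")

# Key-value secret holding the SASL/PLAIN credentials referenced as
# ${{aws_secrets:secrets:username}} and ${{aws_secrets:secrets:password}}.
secretsmanager.create_secret(
    Name="secrets",
    SecretString=json.dumps({"username": "kafka-user", "password": "kafka-password"}),
)

# Plaintext secret holding the CA certificate referenced as ${{aws_secrets:kafka-cert}}.
with open("ca-cert.pem") as f:   # assumed path to your cluster's CA certificate
    secretsmanager.create_secret(Name="kafka-cert", SecretString=f.read())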
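
The sts_role_arn values in the sample configurations refer to a pipeline role that OSI assumes to read secrets and write to the sink. The sketch below creates such a role with Boto3; the role name and permissions policy are illustrative assumptions, and the osis-pipelines.amazonaws.com service principal should be confirmed against the documentation linked above.

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets OpenSearch Ingestion pipelines assume this role
# (assumed service principal; verify in the OSI documentation).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "osis-pipelines.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="pipeline-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Broad example permissions for the sink domain and secrets; scope these down for production.
permissions = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "es:DescribeDomain", "Resource": "*"},
        {"Effect": "Allow", "Action": "es:ESHttp*",
         "Resource": "arn:aws:es:us-east-1:123456789012:domain/*"},
        {"Effect": "Allow", "Action": "secretsmanager:GetSecretValue", "Resource": "*"},
    ],
}
iam.put_role_policy(
    RoleName="pipeline-role",
    PolicyName="pipeline-role-permissions",
    PolicyDocument=json.dumps(permissions),
)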

Create a pipeline with self-managed Kafka as a source

After you complete the prerequisites, you're ready to create a pipeline for your data source. Complete the following steps:

  1. On the OpenSearch Service console, choose Pipelines under Ingestion in the navigation pane.
  2. Choose Create pipeline.
  3. Choose Streaming under Use case in the navigation pane.
  4. Select Self managed Apache Kafka under Ingestion pipeline blueprints and choose Select blueprint.

This will populate a sample configuration for this pipeline.

  5. Provide a name for this pipeline and choose the appropriate pipeline capacity.
  6. Under Pipeline configuration, provide your pipeline configuration in YAML format. The following code snippet shows a sample configuration in YAML for SASL_SSL authentication:
    version: '2'
    kafka-pipeline:
      source:
        kafka:
          acknowledgments: true
          bootstrap_servers:
            - 'node-0.example.com:9092'
          encryption:
            type: "ssl"
            certificate: '${{aws_secrets:kafka-cert}}'
          authentication:
            sasl:
              plain:
                username: '${{aws_secrets:secrets:username}}'
                password: '${{aws_secrets:secrets:password}}'
          topics:
            - name: on-prem-topic
              group_id: osi-group-1
      processor:
        - grok:
            match:
              message:
                - '%{COMMONAPACHELOG}'
        - date:
            destination: '@timestamp'
            from_time_received: true
      sink:
        - opensearch:
            hosts: ["https://search-domain-12345567890.us-east-1.es.amazonaws.com"]
            aws:
              region: us-east-1
              sts_role_arn: 'arn:aws:iam::123456789012:role/pipeline-role'
            index: "on-prem-kafka-index"
    extension:
      aws:
        secrets:
          kafka-cert:
            secret_id: kafka-cert
            region: us-east-1
            sts_role_arn: 'arn:aws:iam::123456789012:role/pipeline-role'
          secrets:
            secret_id: secrets
            region: us-east-1
            sts_role_arn: 'arn:aws:iam::123456789012:role/pipeline-role'

  7. Choose Validate pipeline and confirm there are no errors.
  8. Under Network configuration, choose Public access or VPC access. (For this post, we choose VPC access.)
  9. If you chose VPC access, specify your VPC, subnets, and an appropriate security group so OSI can reach the outgoing ports for the data source.
  10. Under VPC attachment options, select Attach to VPC and choose an appropriate CIDR range.

OSI resources are created in a service VPC managed by AWS that is separate from the VPC you chose in the last step. This selection lets you configure which CIDR ranges OSI should use inside that service VPC. The choice exists so you can make sure there is no address collision between the CIDR ranges in your VPC that is attached to your on-premises network and this service VPC. Many pipelines in your account can share the same CIDR ranges for this service VPC.

  11. Specify any optional tags and log publishing options, then choose Next.
  12. Review the configuration and choose Create pipeline.

You can monitor the pipeline creation and any log messages in the Amazon CloudWatch Logs log group you specified. Your pipeline should now be successfully created. For more information about how to provision capacity for the performance of this pipeline, see the section Recommended Compute Units (OCUs) for the MSK pipeline in Introducing Amazon MSK as a source for Amazon OpenSearch Ingestion.
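
If you're scripting the rollout, you can also poll the pipeline status programmatically. The following Boto3 sketch assumes the hypothetical pipeline name used earlier and waits until the pipeline leaves the CREATING state.

import time
import boto3

osis = boto3.client("osis", region_name="us-east-1")

# Poll until creation completes; a new pipeline typically starts in CREATING.
while True:
    pipeline = osis.get_pipeline(PipelineName="self-managed-kafka-pipeline")["Pipeline"]
    status = pipeline["Status"]
    print(f"Pipeline status: {status}")
    if status != "CREATING":
        break
    time.sleep(30)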

Create a pipeline with self-managed OpenSearch as a source

The steps for creating a pipeline for self-managed OpenSearch are similar to the steps for creating one for Kafka. During the blueprint selection, choose Data Migration under Use case and select Self managed OpenSearch/Elasticsearch. OpenSearch Ingestion can source data from all versions of OpenSearch and from Elasticsearch version 7.0 to version 7.10.

The following blueprint shows a sample configuration YAML for this data source:

model: "2"
opensearch-migration-pipeline:
  supply:
    opensearch:
      acknowledgments: true
      hosts: [ "https://node-0.example.com:9200" ]
      username: "${{aws_secrets:secret:username}}"
      password: "${{aws_secrets:secret:password}}"
      indices:
        embrace:
        - index_name_regex: "opensearch_dashboards_sample_data*"
        exclude:
          - index_name_regex: '..*'
  sink:
    - opensearch:
        hosts: [ "https://search-domain-12345567890.us-east-1.es.amazonaws.com" ]
        aws:
          sts_role_arn: "arn:aws:iam::123456789012:function/pipeline-role"
          area: "us-east-1"
        index: "on-prem-os"
extension:
  aws:
    secrets and techniques:
      secret:
        secret_id: "self-managed-os-credentials"
        area: "us-east-1"
        sts_role_arn: "arn:aws:iam::123456789012:function/pipeline-role"
        refresh_interval: PT1H

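After the migration pipeline finishes, a quick way to sanity-check the result is to compare document counts between the self-managed cluster and the OpenSearch Service domain. The following sketch uses the Python requests library; the hostnames and index pattern come from the sample configuration above, while the credentials are assumptions (it also assumes basic authentication on the domain via fine-grained access control rather than SigV4).

import requests

def total_docs(endpoint, auth, pattern):
    # _count returns the number of documents matching the index pattern.
    resp = requests.get(f"{endpoint}/{pattern}/_count", auth=auth, verify=True)
    resp.raise_for_status()
    return resp.json()["count"]

# Source: the indices selected by the include pattern in the pipeline configuration.
source_count = total_docs(
    "https://node-0.example.com:9200",
    ("admin", "admin-password"),          # assumed source cluster credentials
    "opensearch_dashboards_sample_data*",
)

# Target: the single sink index written by the migration pipeline.
target_count = total_docs(
    "https://search-domain-12345567890.us-east-1.es.amazonaws.com",
    ("master-user", "master-password"),   # assumed domain master user
    "on-prem-os",
)

print(f"source: {source_count}  target: {target_count}")
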
Considerations for the self-managed OpenSearch data source

Certificates installed on the OpenSearch cluster need to be verifiable for OSI to connect to this data source before reading data. Insecure connections are currently not supported.

After you're connected, make sure the cluster has sufficient read bandwidth to allow OSI to read data. Use the Min and Max OCU settings to limit OSI read bandwidth consumption. Your read bandwidth will vary depending on data volume, number of indexes, and provisioned OCU capacity. Start small and increase the number of OCUs to balance between available bandwidth and acceptable migration time.
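
If you start small and later find the migration is bandwidth-constrained, you can raise the limits on an existing pipeline. A minimal Boto3 sketch, assuming a hypothetical pipeline name:

import boto3

osis = boto3.client("osis", region_name="us-east-1")

# Raise the OCU range so the pipeline can read and write faster,
# provided the source cluster has read bandwidth to spare.
osis.update_pipeline(
    PipelineName="self-managed-opensearch-migration",   # assumed pipeline name
    MinUnits=2,
    MaxUnits=8,
)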

This source is generally meant for one-time migration of data, not for continuous ingestion that keeps data in sync between data sources and sinks.

OpenSearch Service domains support remote reindexing, but that consumes resources on your domains. Using OSI moves this compute out of the domain, and OSI can achieve significantly higher bandwidth than remote reindexing, resulting in faster migration times.

OSI doesn't support deferred replay or traffic recording today; refer to Migration Assistant for Amazon OpenSearch Service if your migration needs these capabilities.

Conclusion

In this post, we introduced self-managed sources for OpenSearch Ingestion that let you ingest data from corporate data centers or other on-premises environments. OSI also supports various other data sources and integrations. Refer to Working with Amazon OpenSearch Ingestion pipeline integrations to learn about these other data sources.


About the Authors

Muthu Pitchaimani is a Search Specialist with Amazon OpenSearch Service. He builds large-scale search applications and solutions. Muthu is interested in the topics of networking and security, and is based out of Austin, Texas.

Arjun Nambiar is a Product Manager with Amazon OpenSearch Service. He works on ingestion technologies that enable ingesting data from a wide variety of sources into Amazon OpenSearch Service at scale. Arjun is interested in large-scale distributed systems and cloud-centered technologies, and is based out of Seattle, Washington.
