Migrate from Apache Solr to OpenSearch


OpenSearch is an open source, distributed search engine suitable for a wide variety of use cases such as ecommerce search, enterprise search (content management search, document search, knowledge management search, and so on), site search, application search, and semantic search. It’s also an analytics suite that you can use to perform interactive log analytics, real-time application monitoring, security analytics, and more. Like Apache Solr, OpenSearch provides search across document sets. OpenSearch also includes capabilities to ingest and analyze data. Amazon OpenSearch Service is a fully managed service that you can use to deploy, scale, and monitor OpenSearch in the AWS Cloud.

Many organizations are migrating their Apache Solr based search solutions to OpenSearch. The main driving factors include lower total cost of ownership, scalability, stability, improved ingestion connectors (such as Data Prepper, Fluent Bit, and OpenSearch Ingestion), elimination of external cluster managers like ZooKeeper, enhanced reporting, and rich visualizations with OpenSearch Dashboards.

We recommend approaching a Solr to OpenSearch migration with a full refactor of your search solution to optimize it for OpenSearch. While both Solr and OpenSearch use Apache Lucene for core indexing and query processing, the systems exhibit different characteristics. By planning and running a proof of concept, you can ensure the best results from OpenSearch. This blog post dives into the strategic considerations and steps involved in migrating from Solr to OpenSearch.

Key differences

Solr and OpenSearch Service share fundamental capabilities delivered through Apache Lucene. However, there are some key differences in terminology and functionality between the two:

  • Collection and index: In OpenSearch, a collection is called an index.
  • Shard and replica: Both Solr and OpenSearch use the terms shard and replica.
  • API-driven interactions: All interactions in OpenSearch are API-driven, eliminating the need for manual file modifications or ZooKeeper configurations. When creating an OpenSearch index, you define the mapping (equivalent to the schema) and the settings (equivalent to solrconfig) as part of the index creation API call.

Having set the stage with the basics, let’s dive into the four key components and how each of them can be migrated from Solr to OpenSearch.

Collection to index

A collection in Solr is called an index in OpenSearch. Like a Solr collection, an index in OpenSearch also has shards and replicas.

Although the shard and replica concept is similar in both search engines, you can use this migration as a window to adopt a better sharding strategy. Size your OpenSearch shards, replicas, and index by following shard strategy best practices.

As part of the migration, rethink your data model. In analyzing your data model, you may find efficiencies that dramatically improve your search latencies and throughput. Poor data modeling doesn’t only result in search performance problems but extends to other areas. For example, you might find it challenging to construct an effective query to implement a particular feature. In such cases, the solution often involves modifying the data model.

Differences: Solr allows primary shard and replica shard collocation on the same node. OpenSearch doesn’t place the primary and replica on the same node. OpenSearch Service zone awareness can automatically ensure that shards are distributed to different Availability Zones (data centers) to further increase resiliency.

The OpenSearch and Solr notions of replica are different. In OpenSearch, you define a primary shard count using number_of_shards, which determines the partitioning of your data. You then set a replica count using number_of_replicas. Each replica is a copy of all the primary shards. So, if you set number_of_shards to 5 and number_of_replicas to 1, you’ll have 10 shards (5 primary shards and 5 replica shards). Setting replicationFactor=1 in Solr yields one copy of the data (the primary).

For example, the following creates a collection called test with one shard and no replicas.

http://localhost:8983/solr/admin/collections?
  action=CREATE
  &maxShardsPerNode=2
  &name=test
  &numShards=1
  &replicationFactor=1
  &wt=json

In OpenSearch, the following creates an index called test with five shards and one replica.

PUT test
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}

Schema to mapping

In Solr, schema.xml or managed-schema has all the field definitions, dynamic fields, and copy fields along with the field types (text analyzers, tokenizers, or filters). You use the Schema API to manage the schema. Or you can run in schemaless mode.

OpenSearch has dynamic mapping, which behaves like Solr in schemaless mode. It’s not necessary to create an index beforehand to ingest data. By indexing data with a new index name, you create the index with OpenSearch managed service default settings (for example: "number_of_shards": 5, "number_of_replicas": 1) and the mapping based on the data that’s indexed (dynamic mapping).

We strongly recommend you opt for a pre-defined strict mapping. OpenSearch sets the schema based on the first value it sees in a field. If a stray numeric value is the first value for what is otherwise a string field, OpenSearch will incorrectly map the field as numeric (integer, for example). Subsequent indexing requests with string values for that field will fail with an incorrect mapping exception. You know your data, you know your field types, and you’ll benefit from setting the mapping directly.
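As a minimal sketch, a strict mapping can be declared at index creation; the index name products and its fields here are illustrative:

```json
PUT products
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "name":  { "type": "keyword" },
      "price": { "type": "float" }
    }
  }
}
```

With "dynamic": "strict", a document that contains a field not present in the mapping is rejected, rather than silently creating a wrongly typed field.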

Tip: Consider performing a sample indexing run to generate the initial mapping and then refine and tidy up the mapping to accurately define the actual index. This approach helps you avoid manually constructing the mapping from scratch.

For Observability workloads, you should consider using Simple Schema for Observability. Simple Schema for Observability (also known as ss4o) is a standard for conforming to a common and unified observability schema. With the schema in place, Observability tools can ingest, automatically extract, and aggregate data and create custom dashboards, making it easier to understand the system at a higher level.

Many of the field types (data types), tokenizers, and filters are the same in both Solr and OpenSearch. After all, both use Lucene’s Java search library at their core.

Let’s look at an example:

PUT index_from_solr
{
  "settings": {
    "analysis": {
      "analyzer": {
        "text_general": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "keyword",
        "copy_to": "text"
      },
      "address": {
        "type": "text",
        "analyzer": "text_general"
      },
      "user_token": {
        "type": "keyword",
        "index": false
      },
      "age": {
        "type": "integer"
      },
      "last_modified": {
        "type": "date"
      },
      "city": {
        "type": "text",
        "analyzer": "text_general"
      },
      "text": {
        "type": "text",
        "analyzer": "text_general"
      }
    }
  }
}

Notable things in OpenSearch compared to Solr:

  1. _id is always the uniqueKey and can’t be defined explicitly, because it’s always present.
  2. Explicitly enabling multivalued isn’t necessary because any OpenSearch field can contain zero or more values.
  3. The mapping and the analyzers are defined during index creation. New fields can be added and certain mapping parameters can be updated later. However, deleting a field isn’t possible. A helpful Reindex API can overcome this problem. You can use the Reindex API to index data from one index to another.
  4. By default, analyzers apply at both index and query time. For some less-common scenarios, you can change the query analyzer at search time (in the query itself), which overrides the analyzer defined in the index mapping and settings.
  5. Index templates are also a great way to initialize new indexes with predefined mappings and settings. For example, if you continuously index log data (or any time-series data), you can define an index template so that all the indices have the same number of shards and replicas. Templates can also be used for dynamic mapping control and component templates.
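For instance, the Reindex API mentioned in item 3 can copy documents from an existing index into a new index created with a corrected mapping; the index names here are illustrative:

```json
POST _reindex
{
  "source": { "index": "test" },
  "dest":   { "index": "test_v2" }
}
```

After the reindex completes, you can switch your application (or an index alias) over to the new index and delete the old one.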

Look for opportunities to optimize the search solution. For instance, if the analysis reveals that the city field is only used for filtering rather than searching, consider changing its field type to keyword instead of text to eliminate unnecessary text processing. Another optimization could involve disabling doc_values for the user_token field if it’s only meant for display purposes. doc_values are disabled by default for the text datatype.

SolrConfig to settings

In Solr, solrconfig.xml carries the collection configuration: everything from index location and formatting, caching, codec factory, circuit breakers, commits, and tlogs, all the way up to slow query config, request handlers, and the update processing chain.

Let’s look at an example:


<codecFactory class="solr.SchemaCodecFactory">
    <str name="compressionMode">BEST_COMPRESSION</str>
</codecFactory>

<autoCommit>
    <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
    <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
    <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>

<slowQueryThresholdMillis>1000</slowQueryThresholdMillis>

<maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>

<requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
        <str name="echoParams">explicit</str>
        <str name="wt">json</str>
        <str name="indent">true</str>
        <str name="df">text</str>
    </lst>
</requestHandler>

Notable things in OpenSearch compared to Solr:

  1. Both OpenSearch and Solr have the BEST_SPEED codec (LZ4 compression algorithm) as the default. Both offer BEST_COMPRESSION as an alternative. Additionally, OpenSearch offers zstd and zstd_no_dict. Benchmarks for the different compression codecs are also available.
  2. For near real-time search, refresh_interval needs to be set. The default is 1 second, which is good enough for most use cases. We recommend increasing refresh_interval to 30 or 60 seconds to improve indexing speed and throughput, especially for batch indexing.
  3. Max boolean clause is a static setting, set at the node level using the indices.query.bool.max_clause_count setting.
  4. You don’t need an explicit requestHandler. All searches use the _search or _msearch endpoint. If you’re used to using the requestHandler with default values, then you can use search templates.
  5. If you’re used to using the /sql requestHandler, OpenSearch also lets you use SQL syntax for querying and has a Piped Processing Language.
  6. Spellcheck, also known as did-you-mean, query elevation (known as pinned_query in OpenSearch), and highlighting are all supported at query time. You don’t need to explicitly define search components.
  7. Most API responses are limited to JSON format, with the CAT APIs as the only exception. In cases where Velocity or XSLT is used in Solr, it must be handled at the application layer. The CAT APIs respond in JSON, YAML, or CBOR formats.
  8. For the updateRequestProcessorChain, OpenSearch provides the ingest pipeline, allowing the enrichment or transformation of data before indexing. Multiple processor stages can be chained to form a pipeline for data transformation. Processors include GrokProcessor, CSVParser, JSONProcessor, KeyValue, Rename, Split, HTMLStrip, Drop, ScriptProcessor, and more. However, it’s strongly recommended to do the data transformation outside OpenSearch. The best place to do that is OpenSearch Ingestion, which provides a proper framework and various out-of-the-box filters for data transformation. OpenSearch Ingestion is built on Data Prepper, a server-side data collector capable of filtering, enriching, transforming, normalizing, and aggregating data for downstream analytics and visualization.
  9. OpenSearch also introduced search pipelines, similar to ingest pipelines but tailored for search-time operations. Search pipelines make it easier for you to process search queries and search results within OpenSearch. Currently available search processors include filter query, neural query enricher, normalization, rename field, script processor, and personalize search ranking, with more to come.
  10. You can set refresh_interval and slow logs through the index settings API, along with the other available index settings.
  11. Slow logs can be configured with much more precision than in Solr, with separate thresholds for the query and fetch phases.
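As a sketch of items 2, 10, and 11, refresh_interval and the slow log thresholds can be set through the index settings API; the index name test and the threshold values here are illustrative:

```json
PUT test/_settings
{
  "index": {
    "refresh_interval": "60s",
    "search": {
      "slowlog": {
        "threshold": {
          "query": { "warn": "5s", "info": "2s" },
          "fetch": { "warn": "1s", "info": "500ms" }
        }
      }
    }
  }
}
```

Note the separate query and fetch thresholds, each with its own logging level, which is finer-grained than Solr’s single slowQueryThresholdMillis.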

Before migrating every configuration setting, assess whether the setting can be adjusted based on your current search system experience and best practices. For instance, in the preceding example, the slow logs threshold of 1 second might be excessive for logging, so that can be revisited. In the same example, max.booleanClauses might be another thing to look at and reduce.

Differences: Some settings are made at the cluster level or node level and not at the index level, including settings such as max boolean clause, circuit breaker settings, cache settings, and so on.

Rewriting queries

Rewriting queries deserves its own blog post; however, we want to at least showcase the autocomplete feature available in OpenSearch Dashboards, which helps ease query writing.

Similar to the Solr Admin UI, OpenSearch also includes a UI called OpenSearch Dashboards. You can use OpenSearch Dashboards to manage and scale your OpenSearch clusters. Additionally, it provides capabilities for visualizing your OpenSearch data, exploring data, monitoring observability, running queries, and so on. The equivalent of the query tab in the Solr UI is Dev Tools in OpenSearch Dashboards. Dev Tools is a development environment that lets you set up your OpenSearch Dashboards environment, run queries, explore data, and debug problems.

Now, let’s construct a query to accomplish the following:

  1. Search for shirt OR shoe in an index.
  2. Create a facet query to find the number of unique customers. Facet queries are called aggregation queries in OpenSearch, also known as aggs queries.

The Solr query would look like this:

http://localhost:8983/solr/solr_sample_data_ecommerce/select?q=shirt OR shoe
  &facet=true
  &facet.field=customer_id
  &facet.limit=-1
  &facet.mincount=1
  &json.facet={
   unique_customer_count:"unique(customer_id)"
  }

You can rewrite the preceding Solr query in OpenSearch query DSL by combining a match query with a cardinality aggregation.
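As a sketch, with the index name and field names carried over from the Solr example (and text assumed as the default search field), the equivalent OpenSearch query DSL might look like:

```json
GET solr_sample_data_ecommerce/_search
{
  "query": {
    "match": {
      "text": "shirt shoe"
    }
  },
  "aggs": {
    "unique_customer_count": {
      "cardinality": {
        "field": "customer_id"
      }
    }
  }
}
```

The match query defaults to OR semantics across its terms, mirroring q=shirt OR shoe, and the cardinality aggregation returns the approximate count of distinct customer_id values, mirroring unique(customer_id).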

Conclusion

OpenSearch covers a wide variety of use cases, including enterprise search, site search, application search, ecommerce search, semantic search, observability (log observability, security analytics (SIEM), anomaly detection, trace analytics), and analytics. Migration from Solr to OpenSearch is becoming a common pattern. This blog post is designed to be a starting point for teams seeking guidance on such migrations.

You can try out OpenSearch with the OpenSearch Playground. You can get started with Amazon OpenSearch Service, a managed implementation of OpenSearch in the AWS Cloud.


About the Authors

Aswath Srinivasan is a Senior Search Engine Architect at Amazon Web Services currently based in Munich, Germany. With over 17 years of experience in various search technologies, Aswath currently focuses on OpenSearch. He’s a search and open-source enthusiast and helps customers and the search community with their search problems.

Jon Handler is a Senior Principal Solutions Architect at Amazon Web Services based in Palo Alto, CA. Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have search and log analytics workloads that they want to move to the AWS Cloud. Prior to joining AWS, Jon’s career as a software developer included four years of coding a large-scale, ecommerce search engine. Jon holds a Bachelor of the Arts from the University of Pennsylvania, and a Master of Science and a PhD in Computer Science and Artificial Intelligence from Northwestern University.
