Solr is the popular, blazing fast open source enterprise search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting, faceted search, dynamic clustering, database integration, rich document (e.g., Word, PDF) handling, and geospatial search.
Why am I trying to integrate Solr search with ADF Faces? Take a scenario where we need to build an eBook application with a large data set. Each book may consist of some 200 pages, and a user may want to search for content inside a single book or across books. It is very difficult to work out on which page of a book the content exists, and doing that in the database puts an overhead on the DB server. By using Solr we can avoid this: we index the content and get full-text search as well.
In Part 1 we will learn how to do content indexing. In my next post, Part 2, I will show how full-text search can be integrated with an ADF application. Follow the article to install the Solr server on Windows; I have installed Apache Solr 4.0. After the complete installation of the JDK, Tomcat, and Solr, we can see the screen as follows: click on the collection1 schema and notice that Num Docs is 0.
Next, download SolrJ version 4.0. SolrJ is a Java client to access Solr; it offers a Java interface to add, update, and query the Solr index.
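To give a feel for that interface before we wire it into the ADF project, here is a minimal sketch of connecting, adding (updating is just re-adding a document with the same id), and querying. The class name, URL, and field values are placeholders of my own, not taken from the application built below; the CommonsHttpSolrServer class is the same one used in the client code later in this post.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

public class SolrJQuickTour {
    public static void main(String[] args) throws Exception {
        // Point the client at the Solr instance (placeholder URL)
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8080/solr");

        // Add a document; re-adding a document with the same id acts as an update
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        doc.addField("title", "Sample Book");
        server.add(doc);
        server.commit();

        // Query the index and print the matching titles
        QueryResponse response = server.query(new SolrQuery("title:sample"));
        for (SolrDocument hit : response.getResults()) {
            System.out.println(hit.getFieldValue("title"));
        }
    }
}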
Model Diagram: Download the SQL script.
Next, add the fields that will be indexed in the Solr server. Shut down the Tomcat server first, then open C:\solr\collection1\conf\schema.xml and alter the schema file by adding the fields below under the <fields> tag.
Notice that a few of the fields will already be there, so add only the missing ones, then start the Tomcat server and try to access the Solr admin page again. If the page doesn't load properly, the schema file has an issue.
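The original field list isn't reproduced here, but going by the fields the Java client indexes later in this post (id, title, category, productby, price, description, features), the additions under <fields> would look roughly like this. Treat the types and attributes as assumptions rather than a copy of the original schema; "id" is normally already defined in the stock schema.xml.

<!-- Illustrative sketch only: field names taken from the indexing client below;
     the types and attributes are assumed. -->
<field name="title"       type="text_general" indexed="true" stored="true"/>
<field name="category"    type="text_general" indexed="true" stored="true"/>
<field name="productby"   type="text_general" indexed="true" stored="true"/>
<field name="price"       type="float"        indexed="true" stored="true"/>
<field name="description" type="text_general" indexed="true" stored="true"/>
<field name="features"    type="text_general" indexed="true" stored="true"/>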
Create a Fusion Web Application with entities based on the CATEGORY and PRODUCT tables. Create a session bean and a Java client, and add the following jars under "Project Properties -> Libraries and Classpath":
- apache-solr-solrj-4.0.0
- commons-codec-1.3
- commons-httpclient-3.1
- commons-io-2.1
- jcl-over-slf4j-1.6.4
- slf4j-api-1.6.4
- slf4j-jdk14-1.6.4
- solr-solrj-1.4.0
All of the above jars will be present in the "apache-solr-4.0.0\dist\solrj-lib" directory of the download; if you can't find all of them, you can download them from the link.
Open the Java client and add the code below.
private static void printProduct(Product product) throws MalformedURLException,
                                                         SolrServerException, IOException {
    // The IP address is hard-coded; this is where the Solr server is installed
    SolrServer server = new CommonsHttpSolrServer("http://10.177.252.178:8080/solr");

    // Build a Solr document from the Product entity
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", product.getId());
    doc.addField("title", product.getTitle());
    doc.addField("category", product.getCategoryRef().getName());
    doc.addField("productby", product.getProductBy());
    doc.addField("price", product.getPrice());
    doc.addField("description", product.getDescription());

    // Features are stored as a ";"-separated list; flatten them for indexing
    String features = product.getFeatures().replaceAll(";", " ");
    doc.addField("features", features);

    System.out.println("Content Indexing Started for Id " + product.getId());
    server.add(doc);
    server.commit();
    System.out.println("Content Indexing Completed for Id " + product.getId());
}
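The part of the client that feeds printProduct isn't shown above. A minimal sketch of its main method could look like the following (assuming java.util.List is imported), where fetchAllProducts() is a hypothetical stand-in for whatever finder your session bean exposes for the PRODUCT entity:

public static void main(String[] args) throws Exception {
    // fetchAllProducts() is hypothetical - replace it with the finder method
    // exposed by your session bean for the PRODUCT entity.
    List<Product> products = fetchAllProducts();
    for (Product product : products) {
        printProduct(product);  // index each record into Solr
    }
    System.out.println("Indexed " + products.size() + " products");
}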
Run the Java client and all the records will get indexed into the Solr server. Now, on the Solr admin home page, we can see that 40 docs have been added for the collection1 schema.
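If you want to double-check the count from code rather than the admin page, a small helper like this could be added to the same client (it reuses the same hard-coded URL and needs the SolrQuery and QueryResponse imports); this is just a convenience sketch, not part of the original client, and after the run above it should report 40:

private static void printIndexSize() throws MalformedURLException, SolrServerException {
    // Query for everything (*:*) and print how many documents the index now holds
    SolrServer server = new CommonsHttpSolrServer("http://10.177.252.178:8080/solr");
    QueryResponse response = server.query(new SolrQuery("*:*"));
    System.out.println("Documents in index: " + response.getResults().getNumFound());
}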