Message Boards

Requesting a remote DS on a distinct Karaf container

Updated 6 years ago by Fabian Bouché.

Requesting a remote DS on a distinct Karaf container

New Member Posts: 12 Join Date: 16/04/08
Hello guys!

I had this idea while attending the Paris Liferay Symposium and I just got time to give it a try.

This is a work in progress that I'd like to share with you so that I can get some feedback.

Let's consider I'm developing business services that I want to request both from my Liferay portlets and from some other, completely different components of my information system.

In a not-so-business-critical environment, I'd be ok with Liferay being the center of my information system, exposing services to other components of my IS.

But in my configuration, I don't want the actual business services implementation to be deployed in the Liferay server. This is mainly for security reasons (Liferay = presentation layer, whereas business services belong to a business layer).

Moreover, I cannot allow all of my information system to rely on services exposed by Liferay, however great Liferay is!

What I usually do in my day-to-day life is expose those business services as SOAP or REST and implement a client inside Liferay. That was not easy in the new Liferay OSGi framework, but we got it running. This approach requires writing quite a bit of boilerplate code.

Thus, I was eager to retain that plug-and-play feel of Declarative Services while being able to run the service implementation outside of the Liferay OSGi container.

This led me to discover Apache Aries RSA:

The Aries Remote Service Admin (RSA) project allows to transparently use OSGi services for remote communication. OSGi services can be marked for export by adding a service property service.exported.interfaces=*. Various other properties can be used to customize how the service is to be exposed.
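
On the provider side, this boils down to a couple of component properties. Here is a minimal sketch modeled on the echotcp example (the aries.rsa.port value matches the logs further down; treat the exact property set as an assumption and check the project README):

import org.apache.aries.rsa.examples.echotcp.api.EchoService;

import org.osgi.service.component.annotations.Component;

// service.exported.interfaces marks the service for remote export; the
// Aries RSA TCP provider then listens for incoming calls on aries.rsa.port.
@Component(property = {
	"service.exported.interfaces=*",
	"aries.rsa.port=8201"
})
public class EchoServiceImpl implements EchoService {

	@Override
	public String echo(String msg) {
		return msg;
	}
}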


I first had to pick an OSGi container and went for Apache Karaf, version 4.1.1.
The idea is to have a service implementation running inside this container.
The aries-rsa project provides a sample echo service: I followed the "service" part of its README.md.

Now, my idea was to transpose the "client" part of that example into Liferay. I picked Liferay 7 GA3.
I first wanted to use the Karaf compatibility layer provided by Milen, but I did not get it to work (the "feature" capability would have been useful).

So I went for manual bundle deployment and this is what I ended up deploying to the Liferay server:

  485|Active     |   10|Aries Remote Service Admin provider TCP (1.10.0)
  486|Active     |   10|Aries Remote Service Admin SPI (1.10.0)
  487|Active     |   10|org.osgi:org.osgi.service.remoteserviceadmin (1.1.0.201505202024)
  488|Active     |   10|Aries Remote Service Admin Discovery Zookeeper (1.10.0)
  489|Active     |   10|Aries Remote Service Admin Discovery Local (1.10.0)
  490|Active     |   10|ZooKeeper Bundle (3.4.10)
  492|Active     |   10|Testportlet (1.0.0.201706191417)
  493|Active     |   10|org.apache.aries.rsa.examples.echotcp.api (1.11.0.SNAPSHOT)
  495|Active     |   10|Aries Remote Service Admin Core (1.11.0.SNAPSHOT)
  496|Active     |   10|Aries Remote Service Admin Event Publisher (1.11.0.SNAPSHOT)
  497|Active     |   10|Aries Remote Service Admin Topology Manager (1.11.0.SNAPSHOT)


I got bundles 485-489 from Maven Central.
I got 490 from the Apache ZooKeeper website.
I built 493-497 myself from the latest snapshot.

And I finally built a very simple portlet containing this code:

package sample.testportlet.portlet;

import java.io.IOException;

import javax.portlet.Portlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

import org.apache.aries.rsa.examples.echotcp.api.EchoService;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import com.liferay.portal.kernel.portlet.bridges.mvc.MVCPortlet;

import sample.testportlet.constants.Constants;

@Component(
	immediate = true,
	property = {
		"com.liferay.portlet.display-category=category.sample",
		"com.liferay.portlet.instanceable=true",
		"javax.portlet.display-name=Testportlet Portlet",
		"javax.portlet.init-param.template-path=/",
		"javax.portlet.init-param.view-template=/view.jsp",
		"javax.portlet.resource-bundle=content.Language",
		"javax.portlet.security-role-ref=power-user,user"
	},
	service = Portlet.class
)
public class TestportletPortlet extends MVCPortlet {

	@Override
	public void render(RenderRequest renderRequest, RenderResponse renderResponse)
			throws IOException, PortletException {

		// Call the remote service: echoService is a local proxy created by
		// Aries RSA that forwards the call over TCP to the Karaf container.
		String msg = echoService.echo("Toto");
		renderRequest.setAttribute(Constants.ECHO_MESSAGE, msg);

		super.render(renderRequest, renderResponse);
	}

	// Mandatory, static reference: the component only activates once the
	// Topology Manager has imported a matching EchoService endpoint.
	// unbind = "-" tells DS that no unbind method is needed.
	@Reference(unbind = "-")
	public void setEchoService(EchoService echoService) {
		this.echoService = echoService;
	}

	private EchoService echoService;

}


And the view.jsp displaying that service response:

<%@ include file="/init.jsp" %>

<h1>Message: ${echoMessage}</h1>
<p>
	<b><liferay-ui:message key="Testportlet.caption" /></b>
</p>


Last thing: I had to put an empty file named org.apache.aries.rsa.discovery.zookeeper.cfg inside osgi/configs in order to trigger endpoint discovery.
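
The file can stay empty here because ZooKeeper runs locally on its default port. If ZooKeeper lives on another host, the same file should carry the connection settings; the property names below are the ones I understand Aries RSA's ZooKeeper discovery expects, so double-check them against the project documentation:

zookeeper.host = 192.168.1.96
zookeeper.port = 2181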

And that's it! All my Liferay server needs to know is the service's API, while the implementation runs inside another container.
If Liferay cannot find a suitable implementation, the portlet component simply does not register itself (some logging would be nice here to catch @Reference failures, if you guys have some ideas; see the sketch below).
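
One idea, sketched here and untested: make the reference optional and dynamic so that the component still activates without the remote service, and log the bind and unbind events yourself. Only the DS annotations and the Liferay logging API are real; the rest reuses names from the portlet above:

import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;

	// Optional + dynamic: the portlet activates even before an endpoint is
	// discovered, and every bind/unbind shows up in the log.
	@Reference(
		cardinality = ReferenceCardinality.OPTIONAL,
		policy = ReferencePolicy.DYNAMIC
	)
	public void setEchoService(EchoService echoService) {
		this.echoService = echoService;
		_log.info("Remote EchoService bound");
	}

	public void unsetEchoService(EchoService echoService) {
		this.echoService = null;
		_log.warn("Remote EchoService unbound");
	}

	private volatile EchoService echoService;

	private static final Log _log = LogFactoryUtil.getLog(TestportletPortlet.class);

The trade-off is that render() must then cope with echoService being null, instead of the component simply staying unsatisfied until the endpoint appears.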

You'll see those nice logs in the Liferay logging console:

15:53:15,624 INFO  [Thread-54][InterfaceMonitorManager:202] calling EndpointListener.endpointAdded: org.apache.aries.rsa.topologymanager.importer.TopologyManagerImport@22761d16 from bundle org.apache.aries.rsa.topology-manager for endpoint: {aries.rsa.port=8201, component.id=2, component.name=org.apache.aries.rsa.examples.echotcp.service.EchoServiceImpl, endpoint.framework.uuid=98d78988-d4ea-4163-a6ae-c51e08fa9eee, endpoint.id=tcp://192.168.1.96:56691, endpoint.package.version.org.apache.aries.rsa.examples.echotcp.api=1.0.0, endpoint.service.id=131, objectClass=[org.apache.aries.rsa.examples.echotcp.api.EchoService], service.bundleid=67, service.imported=true, service.imported.configs=[aries.tcp], service.scope=bundle}
15:53:15,626 INFO  [pool-16-thread-1][RemoteServiceAdminCore:382] Importing service tcp://192.168.1.96:56691 with interfaces [org.apache.aries.rsa.examples.echotcp.api.EchoService] using handler class org.apache.aries.rsa.provider.tcp.TCPProvider.
15:53:15,632 INFO  [Thread-54][BundleStartStopLogger:35] STARTED org.apache.aries.rsa.topology-manager_1.11.0.SNAPSHOT [497]
15:53:17,361 INFO  [pool-16-thread-1][InterfaceMonitorManager:202] calling EndpointListener.endpointAdded: org.apache.aries.rsa.topologymanager.importer.TopologyManagerImport@22761d16 from bundle org.apache.aries.rsa.topology-manager for endpoint: {aries.rsa.port=8201, component.id=2, component.name=org.apache.aries.rsa.examples.echotcp.service.EchoServiceImpl, endpoint.framework.uuid=98d78988-d4ea-4163-a6ae-c51e08fa9eee, endpoint.id=tcp://192.168.1.96:56691, endpoint.package.version.org.apache.aries.rsa.examples.echotcp.api=1.0.0, endpoint.service.id=131, objectClass=[org.apache.aries.rsa.examples.echotcp.api.EchoService], service.bundleid=67, service.imported=true, service.imported.configs=[aries.tcp], service.scope=bundle}


On the Karaf side, I've got this:

2017-06-19T17:51:10,692 | INFO  | NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181 | NIOServerCnxnFactory             | 60 - org.apache.hadoop.zookeeper - 3.4.7 | Accepted socket connection from /127.0.0.1:59858
2017-06-19T17:51:10,694 | INFO  | NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181 | ZooKeeperServer                  | 60 - org.apache.hadoop.zookeeper - 3.4.7 | Client attempting to establish new session at /127.0.0.1:59858
2017-06-19T17:51:10,714 | INFO  | SyncThread:0     | ZooKeeperServer                  | 60 - org.apache.hadoop.zookeeper - 3.4.7 | Established session 0x15cc0070dc10008 with negotiated timeout 4000 for client /127.0.0.1:59858


I tested it in a very simple configuration: both OSGi containers (Liferay and Karaf) were running on my workstation and I went for all default ports.
The next step would be to run this on distinct machines and add some additional nodes to see how it behaves in a high-availability setup.

Note: it's better to start the "Aries Remote Service Admin Topology Manager" after the Liferay server has started. Otherwise, the server goes postal: just stop that bundle from the gogo shell and restart it when the dust has settled. It should be possible to do some tweaking here.
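
With the bundle id from the listing above, that boils down to the following in the gogo shell:

g! stop 497
g! start 497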

Best regards (and I hope some of you will find this useful),
Fabian
Updated 6 years ago by Christoph Rabel.

RE: Requesting a remote DS on a distinct Karaf container

Liferay Legend Posts: 1554 Join Date: 09/09/24
That's really cool.

We did something like that too in 6.2:

What I usually do in my day-to-day life is expose those business services as SOAP or REST and implement a client inside Liferay. That was not easy in the new Liferay OSGi framework, but we got it running. This approach requires writing quite a bit of boilerplate code.


We have lots of Service Builder entities that are just empty shells to provide remote services. But we need to plan an upgrade to DXP soon (maybe early next year), and the current idea is to just throw SB out. I thought it shouldn't be too difficult. What problems did you face? Were they in Liferay, or in the connection to the backend?

We don't want to touch the backend if possible, so it would be nice to be able to keep the REST and SOAP interfaces.
Updated 6 years ago by Fabian Bouché.

RE: Requesting a remote DS on a distinct Karaf container

New Member Posts: 12 Join Date: 16/04/08
Hello Christoph,

Let me provide you with a high-level picture of our journey and the difficulties we met.

We used to embed all the CXF jars when we built a SOAP/REST client inside one of our portlets, but we had to change our approach when migrating to Liferay DXP.

Although I think you don't necessarily have to go that route (as far as I know it's still possible to deploy portlet WARs through the Liferay plugin framework), we decided to port our portlets to OSGi modules.
The cause of our difficulties was that Liferay itself embeds a bunch of CXF OSGi bundles, and our former CXF embedding strategy stopped working because the embedded jars conflicted with those bundles.
Moreover, my feeling is that OSGi encourages you to structure your bundles so that they export and import packages from one another and avoid library duplication. It's still possible to embed jars (see the article by David H Nebinger that helped me a lot at the beginning: https://web.liferay.com/fr/web/user.26526/blog/-/blogs/osgi-module-dependencies), but whereas you're used to having Maven resolve all the transitive dependencies, with Include-Resource you have to add them all by hand, as sketched below (and in the case of CXF, that's a nightmare).
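
For illustration, manual embedding in a bnd file looks roughly like this (the jar names are hypothetical); every transitive dependency needs its own entry:

Include-Resource: lib/commons-lang3.jar=commons-lang3-3.4.jar,\
	lib/httpclient.jar=httpclient-4.5.2.jar
Bundle-ClassPath: ., lib/commons-lang3.jar, lib/httpclient.jar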

We first tried this strategy: https://fr.slideshare.net/amusarra/liferay-7-come-realizzare-un-client-soap-con-apache-cxf-in-osgi-style.

We finally went for an alternative that takes advantage of the CXF OSGi bundles already included in Liferay: we created an OSGi fragment bundle that adds Export-Package directives for all the packages required to build a SOAP client.
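
For the record, such a fragment is little more than a manifest. Here is a sketch, where the Fragment-Host and the exported packages are assumptions that have to match the actual CXF host bundle shipped with Liferay (check its symbolic name with lb in the gogo shell):

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: liferay.cxf.export.fragment
Bundle-Version: 1.0.0
Fragment-Host: org.apache.cxf.cxf-core
Export-Package: org.apache.cxf.jaxws,org.apache.cxf.endpoint
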
We hit one additional issue: https://issues.liferay.com/browse/LPS-67253. I chose an alternative solution that consisted of registering a ProviderImpl as a declarative service, to avoid creating a dummy CXF endpoint as suggested in the discussion.

With a better understanding of the OSGi architecture, we finally decided to build an OSGi module implementing the SOAP client and exporting its capabilities to the portlet modules. To have a framework to guide us, we used the Service Builder archetype with empty-shell entities.
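
Sketched below is roughly the shape of such a client module: a DS component that builds the SOAP port once and exposes it to portlet modules through a plain service interface. Everything here (AccountService, the WSDL URL, the QName) is hypothetical; only the javax.xml.ws.Service API and the DS annotations are real:

import java.net.URL;

import javax.xml.namespace.QName;
import javax.xml.ws.Service;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

// AccountService stands for the @WebService-annotated interface generated
// from the backend WSDL; portlet modules @Reference it like any local service.
@Component(immediate = true, service = AccountService.class)
public class AccountServiceSoapClient implements AccountService {

	private AccountService _port;

	@Activate
	protected void activate() throws Exception {
		URL wsdl = new URL("http://backend.example.com/services/account?wsdl");
		QName serviceName = new QName("http://example.com/account", "AccountService");

		// Resolves to the CXF JAX-WS implementation already present in the
		// container, once the fragment has exported the required packages.
		_port = Service.create(wsdl, serviceName).getPort(AccountService.class);
	}

	@Override
	public String getAccountName(long accountId) {
		return _port.getAccountName(accountId);
	}
}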

I recently had to do the same job for REST clients. I followed the same route, but whereas for SOAP everything was available inside the CXF bundles included with Liferay, for REST I remember having to add one more CXF OSGi module to get the full set of packages.
Updated 3 years ago by Antonio Musarra.

RE: Requesting a remote DS on a distinct Karaf container

Junior Member Posts: 66 Join Date: 11/08/09
Hi!
A few days ago I published a similar work on OSGi Remote Services. All the details are in this GitHub repository: https://github.com/smclab/docker-osgi-remote-services-example