inv:composit:validation [2013/02/28 18:20] – [Evaluation] pablo.rodriguez.mier
| WSC'08 08 | 8119 | 30 | 20 | 5.44 / 6.54 | 5 | 4 |

These tables show: the number of services of each dataset (column #Serv); the number of services of the optimal solution (column #Serv. Sol.); the length of the shortest solution (column #Length); and the average number of inputs and outputs of the services. The objective is to find the optimal composite service (minimum number of services, minimum length) that satisfies the goal concepts, using only the initial inputs provided.
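To make the column meanings concrete, here is a minimal sketch of how those per-dataset statistics could be computed. The record format below is an assumption for illustration only; it is not the actual WSC'08 file format.

```python
# Hypothetical in-memory representation of a (tiny) service dataset.
# Each service declares its input and output concepts.
services = [
    {"name": "S1", "inputs": ["A", "B"], "outputs": ["C"]},
    {"name": "S2", "inputs": ["C"], "outputs": ["D", "E"]},
]

num_services = len(services)  # column #Serv
avg_inputs = sum(len(s["inputs"]) for s in services) / num_services
avg_outputs = sum(len(s["outputs"]) for s in services) / num_services

print(num_services, avg_inputs, avg_outputs)  # 2 1.5 1.5
```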
Exact-matching datasets were calculated by extending the outputs of each web service, including all superclasses of each output as outputs of the service itself (semantic expansion). Thus, the average number of outputs is bigger than in the other datasets. The semantic expansion transforms a semantic matching problem into an exact matching problem when exact and plug-in matches are used to perform the semantic matchmaking. This allows us to test composition algorithms (that do not use semantic reasoners) with the WSC'08 datasets. For example, suppose that a service S1 provides the instance "...
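The expansion step can be sketched as follows. The concept names and the child-to-parent ontology map are illustrative assumptions, not taken from the WSC'08 data:

```python
# Toy ontology: each concept maps to its direct superclass (None = root).
# Concept names here are hypothetical.
SUPERCLASS = {
    "SoyMilkQuantity": "MilkQuantity",
    "MilkQuantity": "Quantity",
    "Quantity": None,
}

def superclasses(concept):
    """All transitive superclasses of a concept, excluding the concept itself."""
    result = []
    parent = SUPERCLASS.get(concept)
    while parent is not None:
        result.append(parent)
        parent = SUPERCLASS.get(parent)
    return result

def expand_outputs(outputs):
    """Semantic expansion: add every superclass of each output to the output set."""
    expanded = set(outputs)
    for concept in outputs:
        expanded.update(superclasses(concept))
    return expanded

# A service producing only a very specific concept...
s1_outputs = expand_outputs({"SoyMilkQuantity"})

# ...after expansion also "provides" the more general concepts, so a
# consumer that needs "MilkQuantity" is found by exact matching alone,
# without invoking a semantic reasoner at composition time.
print("MilkQuantity" in s1_outputs)  # True
```

This is why the expanded datasets report a higher average number of outputs: every output is replaced by itself plus its whole superclass chain.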
Based on these results, we selected the following optimal threshold //N// for each dataset: //N=1// for //...
^ Dataset ^
</code>
Where algorithm.jar is one of the available algorithms:
  * ComposIT: CompositAlgorithm.jar
  * PORSCE-II: PorsceAlgorithm.jar
  * OWLS-Xplan: OWLSXplanAlgorithm.jar
<note important>
These versions of OWLS-Xplan and PORSCE-II were modified to support the integration with the test platform. The original versions of these algorithms can be downloaded here:
  * PORSCE-II: http://
  * OWLS-Xplan 2.0: http://
</note>
You can also launch a background test from the command line, with the following syntax: