Open Call FAQs


What type of Open Call will F-Interop offer?


An Open Call with dedicated funding to support projects in four categories:
  • New testing tools: F-Interop will select proposals to develop new testing tools that extend the F-Interop platform capabilities;
  • New test design: F-Interop will select proposals to develop new interoperability test designs and specifications based on the existing F-Interop tools
    and framework;
  • SME F-Interop assessment reports: F-Interop will allocate small grants to SMEs, enabling them to test the F-Interop platform and to provide a written report on potential improvements;
  • Plugtest Events: F-Interop will select third parties to conduct three remote online plugtest events.

What standardization communities are supported by F-Interop?

This call will target standardization communities including but not limited to ETSI, IETF, ITU, IEEE, OGC and W3C communities.


What categories of persons are supported?

Financial support in the F-Interop open call programme may be provided to:
  • Single European mid-caps, SMEs and Micro SMEs as defined in EU law: EU recommendation 2003/361
  • Web entrepreneurs and individual sole traders
  • European secondary and higher education establishments, research institutes and other not-for-profit research organisations
  • Standards bodies such as IETF, ITU, IEEE and W3C
  • Each of these must be established in an EU Member State, in an Associated Country or in a country that contributes substantially to the financing of the F-Interop research project.
 F-Interop encourages applications from third parties resident in eligible Eastern European countries.

How can I find information about testbed facilities we can use for testing?
What is their status? How will I be able to access them using the F-Interop platform?

The following testbeds and devices will be federated and made available for experimentation in the F-Interop platform.

  • Fed4FIRE is a federation of (at the time of writing) 24 FIRE+ testbeds spread across Europe, bringing together technologies such as cloud, IoT/wireless/wireless mobile, LTE, cognitive radio, 5G, OpenFlow, SDN, NFV and network emulation, all accessible through the same toolset and account. All testbeds speak the same API (Aggregate Manager API, XML-RPC based) and their availability is monitored continuously. The total number of nodes (physical servers, physical wireless devices, physical switches) is around 1000.
  • OneLab is an experimental facility made up of a federation of future Internet testbeds, which together offer large-scale experimentation across heterogeneous resources federated through a single-access portal. The federation includes FIT IoT-Lab (embedded object testbeds), FIT CorteXlab (cognitive radio testbed), NITOS-Lab (wireless testbeds) and PlanetLab Europe (Internet overlay testbed).
  • IoT Lab federates several IoT testbeds, including those of the Universities of Surrey, Patras (CTI) and Geneva, and MI. IoT Lab is extending this federation of testbeds with new crowd-sourcing and crowd-sensing tools based on a secured smartphone application.



All testbeds are already functional. While information about the available devices may be important for applicants in order to identify requirements for their project, access to the different resources will be abstracted and made available to experimenters through the F-Interop Testbed as a Service (TBaaS) APIs, making experiments agnostic to the location and deployment of each testbed. Selected third parties will not be required to develop any of the core modules needed to connect to the federated devices. However, if a selected proposal requires the integration of new devices or testbed sites, that work is expected to be performed by the selected third parties using the architecture's northbound APIs (aka Testing Tools APIs).
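As an illustration of how this abstraction might look from an experimenter's side, the sketch below builds (but does not send) a resource reservation request against a hypothetical TBaaS REST endpoint. The base URL, path, and all JSON field names are assumptions for illustration only, not the actual F-Interop API:

```python
# Hypothetical sketch: reserving a testbed resource through the TBaaS
# APIs. The endpoint path, parameter names and JSON schema below are
# illustrative assumptions, not the real F-Interop interface.
import json
from urllib.request import Request

TBAAS_BASE = "https://tbaas.example.org/api/v1"  # placeholder URL

def build_reservation_request(testbed: str, node_type: str, duration_min: int) -> Request:
    """Build (but do not send) an HTTP request reserving a node.

    The experimenter only names a device category; which physical
    testbed actually serves the request is resolved behind the API.
    """
    payload = json.dumps({
        "testbed": testbed,          # e.g. "fit-iot-lab"
        "node_type": node_type,      # e.g. an embedded IoT node class
        "duration_minutes": duration_min,
    }).encode()
    return Request(
        f"{TBAAS_BASE}/reservations",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_reservation_request("fit-iot-lab", "m3", 60)
print(req.full_url, req.method)
```

The point of the sketch is the shape of the interaction: the experimenter addresses one uniform API rather than each testbed's native tooling.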

Applicants are not required to specify the devices they will need to access at the time of submitting a proposal. However, if enough information to identify the target testbed is available, applicants should state so in their application; this will help the F-Interop consortium identify the right internal partners to assign to the proposal, if successful, in order to provide useful support with any integration issues that might arise during the project.

The main driver for identifying a testbed will be the type of devices available in each testbed as well as the kind of protocols supported (in particular for proposals in the new test design category). If a proposal focuses on developing more generic tools using the provided Testing Tools APIs (northbound APIs), identifying a testbed or required device category is not required.


How can I find information about existing F-Interop tools?

Information about the available F-Interop tools can be found, and is constantly updated, on the F-Interop Tools and Experiments page.


Are there any restrictions on platform usage?

The F-Interop platform must not be used for any unethical activities.


Can you describe what a test description is?

Test descriptions (also called test specifications) are guidelines that describe the test topology (nodes participating in the tests, auxiliary components needed to execute the tests, how they are interconnected, etc.), the test configurations (relevant parameters of the different layers of the protocol stack, etc.) and detailed test case descriptions (tables describing the objective of each test case, reference documents, references to the test configurations used, and the test sequence).

F-Interop adopts ETSI format/templates for the test descriptions.


Examples of test descriptions can be found here:

F-Interop defines a test description format (YAML based) called test extended description (TED), which uses test descriptions as a base for defining the interoperability test cases.
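As a rough, non-normative illustration of the kind of information such a test description carries, the sketch below renders an ETSI-style CoAP test case in Python (the actual TED format is YAML, and all field names here are assumptions, not the normative F-Interop schema):

```python
# Illustrative only: a Python rendering of the kind of fields a
# YAML-based extended test description might carry. Field names are
# assumptions, not the normative F-Interop TED schema.
test_case = {
    "testcase_id": "TD_COAP_CORE_01",        # ETSI-style identifier
    "objective": "Perform GET transaction (CON mode)",
    "configuration": "CoAP_CFG_BASIC",        # reference to a test configuration
    "references": ["RFC 7252, section 5.8.1"],
    "pre_conditions": ["Server offers resource /test"],
    "sequence": [                             # stimulus/check steps
        {"step": 1, "type": "stimulus", "node": "coap_client",
         "description": "Send GET request with Type=0(CON)"},
        {"step": 2, "type": "check", "node": "coap_server",
         "description": "Server sends 2.05 (Content) response"},
    ],
}

# A testing tool would iterate over the sequence and match each step
# against the captured traffic.
for step in test_case["sequence"]:
    print(step["step"], step["type"], step["description"])
```

Note how the structure mirrors the prose definition above: topology (nodes), configuration (a named reference), and a detailed step sequence per test case.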



Can you describe what a test analysis script is?

A test analysis script is a script, part of the testing tool (it may sit on top of a testing tool), that implements the analysis of a particular test case execution/trace.
F-Interop doesn't impose any particular language for these scripts, nor any particular methodology for the analysis (step-by-step analysis, post-mortem test analysis, etc.).
An example of a test analysis script for CoAP interoperability testing, used by one of the provided testing tools called ttproto, is presented below:
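As an illustration of the general shape only (this is not the actual ttproto code; the frame representation and all names are assumptions), a minimal post-mortem analysis for a CoAP GET exchange might look like:

```python
# Hypothetical sketch (not actual ttproto code) of a post-mortem test
# analysis script: it inspects a captured trace and verifies that a
# confirmable GET was answered with a 2.05 (Content) response.

def analyze_coap_get(trace):
    """Return ('pass'|'fail', reason) for a list of decoded frames.

    Each frame is assumed to be a dict with 'src', 'type' and 'code'
    keys; a real tool would decode these from a pcap capture.
    """
    get_seen = False
    for frame in trace:
        if frame["src"] == "client" and frame["code"] == "GET" and frame["type"] == "CON":
            get_seen = True
        elif get_seen and frame["src"] == "server" and frame["code"] == "2.05":
            return "pass", "GET answered with 2.05 Content"
    return "fail", "expected GET/2.05 exchange not found"

verdict, reason = analyze_coap_get([
    {"src": "client", "type": "CON", "code": "GET"},
    {"src": "server", "type": "ACK", "code": "2.05"},
])
print(verdict, reason)
```

A real analysis script would operate on decoded protocol messages rather than plain dicts, but the verdict-per-test-case structure is the essential idea.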


What is the expected reasonable proposal length for each category? 

Each proposal is expected to respect the following maximum length per category:

  • Category A: 30 pages max
  • Category B: 20 pages max
  • Category C: 10 pages max
  • Category D: 10 pages max

Are there specific requirements that each proposal submission should fulfil? 

Yes, a list of requirements per category is provided below.


General Requirements: 

For proposals in categories A and B aiming to develop technical contributions, the developed solutions are expected to be made available as open-source SW, together with support for deploying them automatically (e.g., via git);

The proposers should demonstrate the minimum level of expertise required to fully deliver the expected proposal contributions. Adherence to these requirements should be shown in Section 3.3 (Company Description) of the application template.
In particular:

  • For categories A and B, to integrate the developed solutions with the existing F-Interop Core Platform architecture, we welcome proven practical experience with the considered protocols and standards, as well as experience with MoM architectures, REST APIs and the FIRE/Fed4FIRE framework. This might include participation in previous relevant projects, relevant publications and other standardization activities;
  • For category C, experience in conducting usability tests and knowledge of the relevant methods (remote usability tests, expert reviews, etc.) should be highlighted;
  • For category D, the ability to engage with the targeted communities should be demonstrated; examples include active participation in previous plugfests for those communities, participation in standardization activities and working groups, management of social media channels, etc.;

The number of applicants per proposal is not limited and is subject to the funding allocation performed by the proposal coordinator; the ideal number of participants per proposal is 2, with a maximum of 3 for categories A and B and 1 for categories C and D.

Category A:

Proposed new tools should provide a clear explanation of the advancement brought with respect to the tools already available in the F-Interop platform;
Tools may be SW, HW or a combination of both. Examples of tools include:

  • SW tools implementing the northbound F-Interop core platform APIs (Testing Tools APIs) in order to deploy tests, analyse results and visualize outcomes for protocols currently not supported in the F-Interop platform. Examples include but are not limited to extensions to the currently supported Web of Things functionalities, 6LoWPAN, oneM2M, as well as new protocols such as any in the Low Power Wide Area suite, Bluetooth Low Energy and the like;
  • SW tools implementing the F-Interop core platform southbound APIs (aka Implementation Under Test APIs) in order to integrate new Internet of Things devices providing Implementations Under Test (IUTs) for new protocols. Examples are 1) modification/adjustment of agent components for the new testing tool, conditioned also on the location models they are targeting; or 2) implementation of "proxies" for interfacing target IUTs (new IoT devices) to the F-Interop core platform - e.g. radio packet sniffers/injectors for currently unsupported radio technologies, such as Bluetooth Low Energy (BLE), 802.11, etc.;
  • SW tools for guaranteeing the privacy and confidentiality of the generated test data, provided they offer a clearly explained advancement with respect to the functionalities already provided by the F-Interop platform;
  • HW and SW tools for increasing the security and validity of the messages generated during each test session;
  • SW tools for issuing certifications for passed tests and devices.

Proposals in this Category are expected to be maximum 30 pages in length.


Category B:

  • Proposed new testing scenarios (and the respective test descriptions, test scripts and test analyses) should provide a clear explanation of the advancement with respect to the scenarios already covered for the already supported protocols;
  • Proposed test designs may target protocols including but not limited to the 6TiSCH, CoAP and Web of Things standards. It is expected that the integration of IUTs (e.g. IoT devices) supporting such protocols will not require the development of additional tools but mainly the re-use of the IUT APIs, with most of the effort devoted to using the provided Testing Tools APIs (northbound APIs);
  • Proposed test designs should target activities currently ongoing in standardization communities including ETSI, IETF, ITU, IEEE and W3C;
  • Formal support from SDOs should be considered to increase the impact of the proposal and its chances of successful selection;

Proposals in this Category are expected to be maximum 20 pages in length.

Category C:

  • Proposed usability studies should provide a clear description of the methodology followed, including the management of feedback and the reporting of outcomes to the F-Interop project consortium, in order to influence further development and improvement of the F-Interop core platform.
  • Proposed usability studies should focus on understanding the simplicity of use and effectiveness of the proposed tools in replicating the experience of physical test sessions, while simplifying access to conformance, interoperability and performance online tests;
  • Proposals in this category are expected to be short, with particular focus on the impact and implementation sections, where an overview of the testing methodology followed should be highlighted;
  • Applicants should provide evidence of their previous experience in participating in ETSI plugfests (CoAP, 6TiSCH, etc.), WoT plugfests, or of their involvement in performance testing activities;

Proposals in this Category are expected to be maximum 10 pages in length.

Category D:

  • The organized online plugfests should target communities including but not limited to ITU, IETF, W3C, ETSI and IEEE; proposals should provide detailed information on the organization and explain how the tools available in the F-Interop platform will be leveraged and made accessible;
  • Formal support from and involvement of SDOs should be considered to increase the impact of the proposal and its chances of successful selection;
  • Proposals in this category are expected to be short, with particular focus on the impact in terms of community reached and on the justification of cost savings for participants with respect to physical plugfest events.

Proposals in this Category are expected to be maximum 10 pages in length.


Do applicants need to fall under only one category?

A reference category should be selected in the application phase. This will determine the maximum amount of funding that can be requested. If more than one category applies, a second one can be selected in the application template. However, combining the funding available for two categories in one application is not possible. E.g., if the selected primary category is A (100K EUR max) and the secondary category is B (60K EUR max), the proposal cannot request 160K EUR; applicants are expected to do the work within a maximum of 100K EUR. It is however sensible for applications to focus on only one category per proposal, to keep the proposed work focused and the expected outcome achievable. Multiple submissions in different categories are therefore allowed.


What is the suitable number of partners per proposal?

Proposals can be submitted by a single entity. However, consortia formed by more than one party are allowed. The ideal number of third parties is between 1 and 3 for Categories A and B, and 1 for Categories C and D.


What are eligible third parties for F-Interop open call?

To avoid conflicts of interest, applications will not be accepted from persons or organisations who are partners in the F-Interop consortium or who are formally linked in any way to partners of the consortium. All applicants will be required to declare that they know of no such potential conflicts of interest that should prevent them from applying.

Third parties receiving F-Interop open call financial support will not become party to the F-Interop Grant Agreement and will therefore not need a PIC.
The F-Interop Grant Agreement will not need to be amended to include the selected beneficiaries.


Are large corporates eligible to submit proposals for the F-Interop open call? If so, in which category?

Larger organisations and corporates are eligible if they fall into the following categories: European secondary and higher education establishments, research institutes and other not-for-profit research organisations; standards bodies. This means that large corporates can participate if they have a separate legal entity qualifying as a research organisation.


Is there any more information about the current development of the F-Interop platform?

Accepted publications describing the current features of the F-Interop platform are available and will be updated here.

A list of public deliverables is available here
and related documents will be uploaded as soon as they are available. In particular, D1.3 provides the initial F-Interop platform architecture, which should provide enough guidance on how new tools could be integrated.

Additionally, the following link provides a first version of the F-Interop APIs (NB: the final and stable version of these APIs will be released by April 2017).
Note that a full understanding of the APIs and architecture described in this online document might require you to first familiarise yourself with the content of D1.3 Initial Architecture Design.


Is there any project category seeking proposals for developing a business model for the F-interop platform?

No, but a business model can be defined and added as part of the application process for proposals in Categories A and B, in Section 2, Impact.


Can you clarify the difference between Category A and B?

Category A requires integrating new, currently unsupported protocols and consequently developing new tools to support the whole testing lifecycle (test design, test execution and test analysis); this includes tools for testing protocols that extend the stack of protocols already supported (e.g. adding test tools for 6LoWPAN, RPL, etc.). Other examples of proposed work include developing completely new functionalities not currently present in the F-Interop platform. Category B requires developing and performing new tests for existing, already supported protocols. This might require the integration of new devices on the southbound side of the F-Interop Core Platform using the IUT APIs.


Is there a list of protocols and test cases currently supported by the F-Interop platform?

Yes. F-Interop currently integrates the following protocols, for which a full testing lifecycle is provided: 6TiSCH and CoAP.

A list of the available test descriptions is available here:


When applying for Category A, will I be able to extend/re-use some of your code for implementing the testing tool?

Yes, we allow it and we are all in favour of code re-use.

When a contributor (through the open call or a benevolent one) develops a new testing tool for a new protocol, they can either implement their own components from scratch (simply by being compliant with the API spec) or extend one of the existing testing tool components (e.g. the test analysis component plus the test coordinator component) for the new protocol. The latter may be a more attractive option both for the contributor (code re-use) and for us (homogeneity of components and more testing/debugging of the existing components).

Both are possible. Which one is more suitable has to be studied case by case, and may depend on the type of testing/protocol to be developed.
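As a rough sketch of the second option (all class, method and test case names below are hypothetical, not the real F-Interop code base), extending an existing analysis component for a new protocol might look like:

```python
# Hypothetical sketch of extending an existing testing-tool component
# rather than re-implementing the API spec from scratch. Names are
# illustrative assumptions, not the actual F-Interop components.

class TestAnalysisComponent:
    """Generic base: dispatches captured traces to per-test-case checks."""

    def __init__(self):
        self.checks = {}            # testcase_id -> callable(trace) -> bool

    def register(self, testcase_id, check):
        self.checks[testcase_id] = check

    def analyze(self, testcase_id, trace):
        check = self.checks.get(testcase_id)
        if check is None:
            return "inconclusive"
        return "pass" if check(trace) else "fail"

class MyProtocolAnalysis(TestAnalysisComponent):
    """Contributor's extension: only the new protocol's checks are added;
    dispatching and verdict handling are inherited from the base."""

    def __init__(self):
        super().__init__()
        # Toy check: the trace must contain a HELLO message.
        self.register("TD_MYPROTO_01", lambda trace: "HELLO" in trace)

analyzer = MyProtocolAnalysis()
print(analyzer.analyze("TD_MYPROTO_01", ["HELLO", "ACK"]))  # prints "pass"
```

The design choice is the one described above: the contributor writes only the protocol-specific checks, while the shared component machinery is re-used unchanged.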


Where do I find the source code of the testing tools already implemented by F-Interop?

For the time being, only the CoAP interoperability testing tool is publicly available among the F-Interop testing tools.
We have also published the first version of the agent component (for info about the agent:
For future reference, you can find the complete list of F-Interop public git repositories at:


  • These tools are work in progress.
  • The developers of the CoAP testing tool will in the near future provide test analysis scripts for Base CoAP, Block, Observe and Link Format, which makes these unavailable as topics for Category B applications.


Do I need to identify an internal F-Interop partner I will work with if the proposal is accepted?

For proposals in Categories A, B and D: if you are able to identify a suitable internal partner who could support you during the project if accepted, depending on your objectives and target protocols, please state so in your application template; otherwise we will do our best to assign you the most suitable one.


Can you tell me more about the evaluation criteria?

The standard evaluation form will consider the criteria for the assessment of EU proposals, including Excellence, Impact and Implementation. Please refer to the available templates. The table below provides a summary of the weights associated with each sub-criterion that reviewers will consider for proposal evaluation.

Table 2: Proposals selection criteria

The final selection of proposals will take into account complementary criteria such as the geographic distribution of the selected projects within each category, and the potential impact for standardization communities. The F-Interop consortium and the assessors' panel will welcome submissions including partners from Eastern Europe.

This is emphasized by some of the above sub-criteria. In particular, an explanation of the different sub-criteria is provided below.

  • "Innovative dimension" and "Expertise of the applicant" emphasize what assessors will evaluate in the Excellence and Impact sections of the proposal. The "Innovative dimension" is not considered for "Test design", as in this category the consortium doesn't expect proposers to develop new platform features but rather to extend the existing test library.
  • "Potential number of users/participants" is not related to the number of third parties expected to participate in a given proposal consortium. Rather, as a weighted criterion, it measures how well the created tools/tests/plugfests address the needs of one or more existing communities. E.g., for new tools it is important that the tools developed satisfy the needs of a sizeable number of potential users of the platform, not only a niche. This is also important for category B, test design, and somewhat less, but still, important for proposals focusing on plugfest organization and delivery. Conversely, it is not relevant for Category C, the SME report, as a report should cover the F-Interop platform as a whole, considering the functionalities the platform provides.
  • "Relevance for SMEs" is not relevant for the SME report category because these proposals should aim to evaluate the usability of what the platform offers, not the value it can deliver to specific stakeholders.
  • "Turnover of the applicant" stresses that SME report proposals are welcome from established businesses, including scale-up SMEs, e.g. businesses showing 20% growth in turnover or headcount over the last 3 years. This will help strengthen the authority of the provided report.
  • "Geographic location", as highlighted in this FAQ, reiterates that well-balanced consortia, including partners from Eastern EU countries, are welcome; this should be reflected in the proposal implementation section.
  • "Impact for standardization communities" falls into the "Impact" section of the proposal template, but it emphasizes that proposers should focus on protocols currently under standardization, rather than on new protocols currently growing only within academic environments.
