AI and Machine Learning delivered by Grip

Abi Cannons


Senior Product Marketing Manager

In this article, we will explain how Grip uses Artificial Intelligence (AI) and machine learning technology to better connect participants to one another and to services and products. You will also find some top tips on getting the best from Grip’s AI-powered tools to improve your attendee, exhibitor and sponsor experience at your events.



Grip has the world’s leading AI-powered matchmaking and networking platform; many years of hard work and research have gone into this complex and constantly evolving recommendation system.

At Grip, our approach is to make better recommendations based on a participant's preferences (what kind of people they want to meet, or the types of things they’re interested in). Matching attendees’ interests with suppliers that can deliver on those interests is a simple way to make accurate recommendations. We refer to this as preference matching.

So, let's explain the difference between ‘explicit’ and ‘implicit’ preferences.

Explicit preferences are preferences that a participant has told our AI about. To discover these preferences, some organisers will ask their attendees questions on which types of products or services they’re looking to find at the event, through registration forms or onboarding questions. 

If the organisers don't ask their participants to provide their preferences, or a participant simply hasn’t told them their interests, Grip's AI will not have access to this information, making those preferences implicit (i.e. a preference the participant has always had, but has not told us about).

To identify implicit preferences, Grip's AI has to learn what participants are interested in, using ‘user actions’. So, as a starting point, the Grip recommendation system combines the implicit preferences it learns with the explicit preferences provided by participants.
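The idea of blending explicit and implicit preferences can be illustrated with a small sketch. Everything here (the function, the weights and the example tags) is hypothetical, for illustration only, and is not Grip’s actual implementation:

```python
# Illustrative sketch only: combining explicit preferences (stated via
# registration questions) and implicit preferences (inferred from user
# actions) into a single match score. All names, weights and tags are
# hypothetical examples, not Grip's real system.

def match_score(explicit_prefs: set[str],
                implicit_prefs: dict[str, float],
                candidate_tags: set[str]) -> float:
    """Score a candidate (person, product or service) for a participant."""
    # Explicit matches count in full: the participant told us directly.
    explicit_hits = len(explicit_prefs & candidate_tags)
    # Implicit matches contribute in proportion to how strongly
    # the preference was inferred from the participant's behaviour.
    implicit_hits = sum(weight for tag, weight in implicit_prefs.items()
                        if tag in candidate_tags)
    return explicit_hits + implicit_hits

score = match_score(
    explicit_prefs={"event matchmaking", "analytics"},
    implicit_prefs={"badge scanning": 0.6},   # e.g. inferred from searches
    candidate_tags={"analytics", "badge scanning"},
)
# score is 1.6: one explicit hit plus a 0.6-weighted implicit hit
```

Any real recommendation system is considerably more sophisticated than this, but the principle is the same: stated preferences and learned preferences are combined into one ranking signal.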


What data does Grip’s AI use? 

Grip’s AI uses a wide variety of data to make smart recommendations, including:

  1. Text (e.g. description data)
  2. User searches
  3. Interactions
  4. Similar users
  5. Exploration

This data, or more specifically this ‘metadata’, stores multiple facts about a participant, such as the answers to questions that you ask your attendees.

The reason for using this plethora of data is to try to make the best recommendations given all the knowledge available at that moment in time. As an event progresses, we gain more interaction data and a better understanding of each participant.

  1. Text (e.g. description data)
    Grip’s AI looks at free-text data, such as headlines and summaries, to make recommendations. Natural Language Processing (NLP) techniques are used to better understand participants’ preferences.
  2. User searches
    The searches a participant makes in the Grip platform are also used by the AI. Searching is a good way for participants to find people or products of interest. This data is used by our AI, and it also increases interactions, resulting in better recommendations.
  3. Interactions
    The AI reviews participant interactions on the platform to learn attendees’ preferences. Even a few minor interactions can help the AI a lot, thanks to the way it uses them.
    Therefore, it’s important to encourage attendees to interact with the platform as much as possible, particularly by answering recommendations (with a ‘yes’ or ‘no’), making manual matches or requesting meetings. This leads to improved recommendations for participants.
  4. Similar users
    Even when we don’t have any interaction or preference data, we can use preferences from similar participants to understand what someone may be interested in. 
  5. Exploration
    We also use algorithms to explore what interests participants have. This means some of our recommendations may not seem intuitive. However, these are likely the AI trying different things to see whether you may be interested in them. This is a deliberate method to avoid the recommendation equivalent of ‘filter bubbles’ (i.e. the AI learning one specific interest and not showing you people matching other valid interests of yours).
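One common, generic way to implement this kind of exploration is an epsilon-greedy policy: most of the time recommend the best-ranked match, but occasionally try something outside the participant's known interests. The sketch below illustrates the technique in general; it is not Grip’s actual algorithm, and all names are made up:

```python
# Generic epsilon-greedy exploration sketch (illustrative only, not
# Grip's real algorithm). With probability epsilon, recommend something
# the participant hasn't been matched with yet; otherwise recommend the
# top-ranked known match.
import random

def pick_recommendation(ranked, pool, epsilon=0.1, rng=None):
    rng = rng or random.Random()
    # Items in the overall pool that the ranking hasn't surfaced yet.
    unexplored = [item for item in pool if item not in ranked]
    if unexplored and rng.random() < epsilon:
        return rng.choice(unexplored)   # explore: test a new interest
    return ranked[0]                    # exploit: best known match
```

A small epsilon keeps most recommendations relevant while still probing for interests the AI hasn't discovered, which is exactly the ‘filter bubble’ safeguard described above.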

Read more about our AI-powered matchmaking technology here.

Top tips:

Best questions to ask:

The ideal questions vary depending on:

  • Type of event (e.g. networking, trade show, etc.)
  • Demographics of exhibitors and attendees
  • Industry

The best questions are those which reveal why your attendees want to meet others. This requires you to identify the questions and answers that matter most. These are generally the best questions for the AI as well, since they can be used both for matching and for learning.

There are three types of questions our system uses:

  1. Supply and demand (e.g. what products do I supply vs what products am I interested in)
  2. Self-matching facts (e.g. topic of interest)
  3. Facts about individual or business (e.g. job role, industry or company size)

Grip’s AI is most effective when all three types of question are used, though this is not strictly necessary. However, it’s important to ensure the answers match exactly. For example, a supply answer (e.g. I provide recruitment services) needs to exactly match the demand answer (e.g. I want recruitment services).
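The need for exact matching can be seen in a two-line sketch: if supply and demand answers are compared as plain strings, only identically worded answers pair up. The answer sets below are made-up examples, not real event data:

```python
# Illustrative only: preference matching pairs a supply answer with a
# demand answer by comparing the answer values as exact strings.

supply = {"recruitment services", "catering"}       # what exhibitors offer
demand = {"recruitment services", "wi-fi providers"} # what attendees want

matches = supply & demand   # exact string intersection
# Only "recruitment services" matches. A differently worded answer such
# as "Recruitment Services" (different casing) would NOT pair up with
# "recruitment services" -- hence the need for identical wording.
```

This is why a supply answer and its corresponding demand answer should be configured with exactly the same wording.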

If the system doesn’t have the above information, it will learn over time from participants’ interactions in the app. However, as this takes time, the question types should ideally be completed before the event. Assuming the questions are answered reasonably honestly, this gives the AI a better starting point.

It’s worth noting that matchmaking performance doubles (on average) when both a ‘supply/demand’ question AND a ‘fact-based’ question are included (either a self-matching fact or a fact about the individual or business). To ensure event organisers provide a great participant experience, where participants aren’t spending time completing endless questions, different questions can be asked based on participants’ profiles.


Best practices on numbers of questions/answers:

The solution to the problem of having too much data is to limit your questions and answers, if possible. Of course, the ideal number varies, depending on a number of factors, including:

  • Industry
  • Specialism
  • Question class (fact, matching fact, supply and demand) 

We typically find that around 5 questions with 5-10 answers each is a good rule of thumb.

With matching-fact or supply and demand questions, because of the advantage provided by matching them, you can potentially add a few more questions and answers, perhaps up to 10 questions.

It’s important to use as much industry knowledge as possible when constructing questions; however, organisers can sometimes get lost in the details rather than seeing where they can simplify.

Another good practice, when there are too many answers to a specific question, is to see if the question and answers can be split into multiple question/answer pairs (typically two).

Example of bad question/answer set:

  • Original question: What do you want from this event?
  • Original answers: event matchmaking, registration providers, analytics, badge scanning, wi-fi providers, hybrid event-tech, entertainment companies, caterers, bouncers, venues, advertisers, marketers, banner makers, print services

Example of good question/answer set:

Question 1: What event technologies are you interested in?

Answers: event matchmaking, registration providers, analytics, badge scanning, wi-fi providers, hybrid event-tech

Question 2: What event services are you interested in?

Answers: entertainment companies, caterers, bouncers, venues, advertisers, marketers, banner makers, print services

Answers to avoid:

A common answer type which doesn’t help Grip’s AI is ‘all of the above’. There are a few reasons this is not helpful:

  • It is difficult to parse (there can be many ways of phrasing ‘all of the above’).
  • It is not actually informative, as all answers are selected.
  • It is too easy to select (participants tend to choose it because they are ‘not not interested’ in an option, rather than highlighting the subset they are really interested in).

We recommend against answers like this, and prefer participants to select every relevant option from the list individually, if needed. Including an ‘all of the above’ answer encourages participants to use it, rather than providing specific answers.

The converse of ‘all of the above’ is ‘other’. This type of answer is slightly more useful, but should still not be encouraged. Where possible, answers should broadly cover all possibilities without resorting to the equivalent of ‘other’, potentially by making some answers broader rather than more specific.


We hope this article has been helpful for you. If you have any additional questions about AI, recommendations or machine learning through the Grip platform, our data scientists will be happy to help.