Of the 123 conferences identified in the “Identification and classification of conferences” section, seven were rejected from further analysis. Three of these seven were rejected because, although physicists were invited to participate, later examination showed that the focus of the conference lay outside the classification scheme being used (two conferences focused on scientific computing, the third solely on chemistry). A further three conferences were rejected because it proved impossible to obtain more details about them and thus to classify them with confidence. These three conferences were all based in China. No Twitter activity was associated with them, but whether this was due to firewall restrictions or simply to lack of use of the service is not known. A final conference was rejected for perhaps more interesting reasons. A conference devoted to Hawking radiation attracted 32 participants, but during the event 139 different Twitter accounts used the conference hashtag. Further investigation showed that four of the 32 participants possessed Twitter accounts, but none of them tweeted using the conference hashtag (although one participant did tweet from the conference without using the hashtag). Inspection of tweet content and associated metadata made it clear that the conference hashtag was being used by people around the world, not necessarily scientists, who were interested in a public lecture given by Professor Stephen Hawking. Public outreach is certainly a significant aspect of Twitter use and, as discussed in the concluding section, the questions it raises are worthy of further study. However, the use of Twitter at this particular scientific meeting was different in kind from that at the other conferences in the study, and it was therefore rejected from further analysis. The “Appendix” section contains the titles of the 116 conferences that were analysed for Twitter activity.
No attempt was made before taking data to identify equal numbers of conferences across each of the various subject areas. Table 2 gives the number of conferences in each classification and, as can be seen, 21 conferences were classified as “Astro/Particle” and 95 conferences were classified as “Other”.
Table 2 Number of conferences in each classification group

Classification     Number of conferences
Astro/Particle     21
Other              95
The 116 conferences selected for analysis covered a wide geographical spread, with venues situated in 34 different countries. Broadly, these were split into those taking place in UK/Europe (80 events), Asia/Australasia (19 events), US/Canada (14 events) and Latin America (3 events). The title of many conferences explicitly expressed an international flavour to the event (see “Appendix” section) but in all cases the language of the conference website was English.
Note was also taken of conference duration and the number of participants. The intention behind this was to obtain a better measure of Twitter activity than a simple count of the number of tweets: a 1000-delegate conference of five days’ duration has more available “space” for Twitter activity than a 50-delegate workshop lasting two days. Clearly, conference duration is a straightforward matter of record. The figures for participant numbers, however, should be treated with a degree of caution. In many cases, the conference website or other publicly accessible channel published a list of names and affiliations of those registered to attend the conference; this gave a precise number of participants, and the number was checked both during and after the conference. Post-conference figures were used when available. In about the same number of cases, further research was required in order to ascertain participant numbers; the necessary information was available in post-conference reports, learned society publications, open-source event-organising sites such as Indico, and so on. In a small number of cases, conference organisers were approached directly and asked for participant numbers. However, although precise figures for registered participants can be obtained, these are not necessarily completely accurate. For example, it is possible that some people found themselves unable to attend a conference but still appear on a list of delegates; others might have registered late and do not appear on such a list; yet others might have requested not to have their names appear on a public list. Furthermore, not all participants attend all available sessions of a conference. Nevertheless, although there is an inevitable uncertainty in the metric, the product of conference participants and conference duration does seem to provide a reasonable measure of the size of an event.
Fortunately, as will be seen in the “Results” section, an uncertainty in participant number as large as 10 % (which could be viewed as unduly pessimistic) turns out not to affect the conclusions.
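The size metric described above, and the reason a uniform uncertainty in participant numbers leaves comparisons between events intact, can be sketched as follows. The figures used here are illustrative only; they are not taken from the study’s data set.

```python
# Size metric for a conference: participants x duration, in "delegate-days".
# Illustrative figures only; not the study's data.

def event_size(participants: int, duration_days: int) -> int:
    """Available 'space' for Twitter activity, in delegate-days."""
    return participants * duration_days

large = event_size(1000, 5)   # 1000-delegate, five-day conference: 5000 delegate-days
small = event_size(50, 2)     # 50-delegate, two-day workshop: 100 delegate-days

# A 10% uncertainty in participant numbers scales the metric by the same
# 10%, so the relative ordering of events by size is unchanged.
low, high = event_size(900, 5), event_size(1100, 5)
```

Because the metric is a simple product, a fixed fractional error in one factor propagates directly to the metric without reordering events, which is why the conclusions are robust to it.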
The conferences in this sample ranged in size from 1851 participants at one extreme to 40 participants at the other. An analysis showed no relationship between the number of participants and the level of Twitter activity.
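One way such a check for a relationship might be carried out is with a rank correlation between participant numbers and tweet counts; the sketch below uses a hand-rolled Spearman coefficient on invented sample data (the study’s actual values and method are not reproduced here).

```python
# Sketch: Spearman rank correlation between conference size and tweet counts.
# Data are invented for illustration; they are not the study's measurements.

def ranks(values):
    """1-based ranks of each value; tied values receive their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

participants = [40, 120, 350, 900, 1851]   # invented
tweet_counts = [15, 3, 200, 12, 80]        # invented

rho = spearman(participants, tweet_counts)
```

A rank-based coefficient is a natural choice here because conference sizes span nearly two orders of magnitude, so a linear (Pearson) correlation would be dominated by the largest events.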