Public Perceptions on Application Areas and Adoption Challenges of AI in Urban Services

Artificial intelligence (AI) deployment is highly relevant to local governments, for example, in planning and delivering urban services. AI adoption in urban services, however, is an understudied area; in particular, there is limited knowledge, and hence a research gap, concerning the perceptions of the public, i.e., the users/receivers of these services. This study aims to examine people's behaviours and preferences regarding the urban services most suited for the application of AI technology and the challenges governments face in adopting AI for urban service delivery. The methodological approach includes data collection through an online survey in Australia and Hong Kong and statistical analysis of the data through binary logistic regression modelling. The study finds that: (a) Attitudes toward AI applications and ease of use have significant effects on forming an opinion on AI; (b) initial thoughts regarding the meaning of AI have a significant impact on perceived AI application areas and adoption challenges; (c) perception differences between the two countries in AI application areas are significant; and (d) perception differences between the two countries in government AI adoption challenges are minimal. The study consolidates our understanding of how the public perceives the application areas and adoption challenges of AI, particularly in urban services, which informs local authorities that deploy or plan to adopt AI in their urban services.

earlier events and afterwards redesign its functionality according to the data and resulting events [11]. One of the most widely used machine learning applications is targeting advertising campaigns and making personalized suggestions based on earlier behaviour. Such applications follow the principle that the algorithm learns from earlier cases rather than using predefined decision paths. Machine learning is most suitable for analyses based on discrete categories and continuous numerical variables [12].
Urban services generally aim to strengthen elements of "good" local governance, defined by transparency and openness, reliability and trustworthiness, and inclusiveness [13]. The provision of traditional public services such as health and security (e.g., police and fire departments) is among the core responsibilities of local governments and their organizations. Thus, cities are responsible for maintaining and developing their vicinities, and for this purpose, there are a significant number of applications where AI can improve the efficiency and reliability of public/urban services [14,15]. For example, traffic control applications and big data analytics are essential in developing road infrastructure and smart mobility services. In the future, autonomous vehicles will likely be in constant data exchange with the road infrastructure and traffic monitoring technologies, selecting optimal routes for each trip [16].
Our study focuses on public opinions of AI integration into urban service provision to contribute to bridging the knowledge and research gap on the topic. It aims to explain people's behaviours and preferences regarding the urban services most suited for the future application of AI technology and the critical challenges for governments to adopt AI for urban service delivery. The study applies an extensive survey for data collection in Australia and Hong Kong. It presents an extensive array of questions, and their answers, in relation to the prospects of AI applications and the currently perceived challenges and obstacles that AI-based service development is likely to encounter from the user's point of view. The survey captures public perceptions of AI and related beliefs. The questionnaire includes claims related to public (local government) funding of AI, skills and technical capabilities to adopt AI solutions in urban service palettes, as well as issues of transparency and trust in digitalization and AI in society.

2-Literature Background: Societal Issues and AI in Urban Services
AI is one of the most disruptive technologies of our time with many powerful applications that cities around the globe have started to take advantage of [17]. According to Yigitcanlar et al. [18], "in the context of cities, AI is the engine of automated algorithmic decisions that generate various efficiencies in the complex and complicated local government services and operations. Managing city assets with structural health monitoring, energy infrastructure fault detection and diagnosis, accessible customer service with chatbots, and automated transportation with autonomous shuttle busses are among the many examples of how AI is being utilized in the local government context." Some of the leading cities actively investing in and exploring the capabilities of AI include, but are not limited to, San Francisco, London, Montreal, Tel Aviv, Singapore, New York, Beijing, Bangalore, Paris, and Berlin [19].
Technology adoption, including AI, entails technological and societal challenges for which earlier research has identified several determinants [20]. The main technical features include operational reliability; fluent customer feedback between the end-user and the service or technology provider; and an easy-to-use end-user interface [21]. Moreover, data security and provision responsibility are the foundation for e-service implementation, including AI [22]. Most contemporary urban e-services are essentially exchanges of information, forms, and agreements, and their digitalization process is, in principle, straightforward. However, due to the common lack of collective design of public sector information technology system architectures, the task has proven to be more difficult in practice than anticipated [23,24].
In terms of societal issues, local governments provide the majority of public services in cities. Hence, understanding user/public opinions regarding the limited financial resources and investment capabilities of local governments is important [25]. This way, it is possible to construct a general view of the beliefs and attitudes that respondents have on the public sector's possibilities to support AI in its e-service development [26,27]. Local government collaboration is one potential means to increase investment capabilities. In general, collaboration modes include different forms of partnerships (e.g., public-private partnerships) and have become one of the main forms of developing service provision.
Similarly, information and communication technology (ICT) user studies have highlighted the importance of end-users' technology knowledge and skillsets [28]. To handle the extensive field of AI applications and their adoption challenges, we have identified the following topics (Table 1) in the literature [9][29][30][31][32]. These identified broad societal categories are linked to the empirically investigated application areas and challenges in this paper. These categories are elaborated on below.

2-1-Control Systems and Security (SC1)
Table 1 starts from the assumption (depicted as SC1) that transparency and service quality are essential to the delivery of public sector services. These are questions of control and monitoring that are fundamentally intertwined with cybersecurity and trustworthy governance [26]. Machine learning and AI entail trust issues; therefore, they should also be looked at from the risk and problem perspective. McGraw et al. [33] provided an extensive taxonomy of the machine learning and AI risks that current system architectures face. This taxonomy is the basis for constructing the questions related to challenges in this paper's empirical part. These risks include, but are not limited to, data manipulation and falsification, data confidentiality and trustworthiness, and transfer risks. The risk aspect is operationalized through the ethical considerations of the respondents in the survey. Regulation plays a significant role here, as the legislation related to decision-making and responsibilities, particularly concerning public officials executing decisions, becomes blurred if AI decision-making is involved [34].

2-2-Digital Divides, Social Structuring and Community (SC2)
AI brings a new view to the debates concerning digital divides (SC2) [35][36][37]. The public perception of the digital divide is an important study subject because the AI concept is not yet fully established, and thus its meaning varies when discussed in different contexts. AI also involves highly complex technological development processes, and the number of people with in-depth knowledge of AI in technological terms is minimal. Thus, survey studies on new and emerging technologies provide a general view of respondents' beliefs and attitudes [38][39][40][41]. These are also questions of social coherence and equality in a community [42,43]. As computerized systems become more complex, it is likely that social differentiation will grow: those who can make use of, or even understand, the systems are put in a better position than disadvantaged groups. As the common technology sphere has become ubiquitous (e.g., the internet-of-things), several traditional explanative variables have lost their explanative power (e.g., education and income). The differences and significances are most prominent in work- and content-related issues. For example, highly educated people tend to appreciate news and fact-related services and information sources more than other educational groups. However, general questions, such as the total time of daily technology use, do not necessarily differentiate between education or income groups. The future-oriented topics consider the short- and long-term impacts that AI might have. In this regard, AI development is one of the current megatrends that create societal differentiation and digital divides [44].

2-3-Economy and Business (SC3)
In the case of the economy and management (SC3), company size and investment power are significant factors in moulding public perceptions. Global technology giants (e.g., Alphabet, Meta) are most often associated with AI development through customer profiling [45]. Questions of trust and cybersecurity are most often associated with them [46]. Therefore, it is necessary to look at the actual AI services people commonly use [47]. In practice, urban e-services create immaterial (service) and material (product) flows. These include urban data management and intelligent transport systems and services (e.g., traffic optimization and product deliveries). Reliability and trustworthiness are the most significant attributes defining the successfulness of the service in question, and in critical societal domains, such as social security and health technologies, the trust requirements are decisive [48]. In general, the urban e-services on which AI has the most substantial impact vary according to the particular provision logic: private service provision models, public in-house production models, and their combinations, i.e., public-private partnerships. Sectoral cooperation has been an important method of improving quality in online environments [49].

2-4-Information Society and Know-how (SC4)
Technologies have a significant impact on the future of the information society, work, and education-based know-how (SC4). Education is essential here, as highly educated people tend to recognize, and are better equipped for, critical assessment of, for example, the end-user agreements required by most e-services and applications. In the case of the future of work, AI enhances productivity and innovation to reduce costs and increase efficiency. Commonly, we can distinguish between simple automation tasks (machines substituting human work) and higher-level AI applications (e.g., automated decision-making) [50]. The latter connects machine learning algorithms to the adjustment of decisions (each decision affects the future decisions made by the AI). AI also minimizes errors and mistakes in repetitive tasks, thus decreasing human error. The efficiency claim also includes the idea that dangerous and hazardous tasks can be transferred to robots, increasing safety levels in the relevant occupations [51]. These issues may also be looked at from the point of view of disadvantages. These include cost-benefit considerations, as in some cases technology implementation costs (e.g., in robotics) may be too high for a feasible return on investment [52].

2-5-Sustainability, Wellbeing, and Health (SC5)
The final category (SC5) brings up important elements of sustainability (a clean environment), health, and wellbeing. Life-quality issues are most often associated with e-health technologies and remote healthcare. AI-based health services possess significant future potential, particularly for commonly diagnosed health problems where the existing data volume is high enough to enable highly accurate diagnosing. The adoption and use of automated or AI-based health consulting or medical prescriptions are based on trust and on minimizing the possibility of error. Earlier research has shown that public attitudes towards technology (in general) depend on the respondent's age, education, and occupation [53][54][55]. Health e-services are not an exception. The identified factors also entail respondents' attitudes towards sustainability issues and their appreciation of cleaner production and sustainable consumption in urban spaces.
The presented mix of SCs is the foundation of the empirical section detailing the current socioeconomic structuring of AI perceptions (see Section 4). While there are some studies on what urban managers think about the prospects and constraints of AI in the context of local government services [18], there is a knowledge gap in understanding what the public thinks about AI adoption in urban services, especially considering societal differentiation and digital divides. This study focuses on socioeconomic variables (see Table 2) and locations (Australia and Hong Kong) to bridge this gap. We use two main domains for looking at public opinions: first, in terms of future applications and their impact on daily life, and second, in terms of the obstacles and challenges that AI may entail (see variables in Table 3).
The survey included the following multiple-choice questions and response options.

The most beneficial function of AI technology is:
- To automate data collection, management, and analysis;
- To complete tasks otherwise requiring human input;
- To learn, evolve and improve decision-making over time;
- To monitor the environment, sense changes and adjust decision-making accordingly;
- Other.

The most promising about the future use of AI is to:
- Enhance productivity and innovation;
- Reduce costs and increase resources;
- Reduce error and mistakes;
- Increase free time for humans to complete other tasks;
- Improve safety by completing dangerous tasks for humans;
- Improve functionality of basic services;
- Optimise energy consumption and production;
- Assist in the development for change and potential risks;
- Reduce crime and monitor illegal behaviours;
- Aid in disaster/emergency prediction, management, planning, and operations;
- Provide support to citizens in need;
- Other.

The biggest disadvantage of AI technology is:
- AI will be highly costly;
- AI could make many tasks completed by humans obsolete;
- AI is only as good as the programming and data that is an input;
- There is a risk that some will abuse AI for their benefit;
- Other.

Future consideration of AI is a challenging topic for a survey study. However, it is likely that the traditional explanative variables of the social sciences (e.g., education and age) become strongly visible. Empirically, a large set of questions is focused on beliefs and opinions concerning AI and local government. The paper applies aspects of technological possibilities, economic feasibility, and social consequences entailed by AI development to systematize the approach. For example, in the AI future application section, questions related to inclusion are important indicators in assessing societal impacts. The empirical data concern both positives (opportunities) and negatives (challenges). This enables numerous comparative alternatives for the analysis and interpretation of the content questions.
Due to the complexity of the presented theme and the blurred and varying use of the term AI, most of the questions are asked in dichotomous form (yes/no). These methodological decisions are detailed in the following section.

3-1-Data Collection
The study adopted a case study approach to investigate public perceptions of AI in the context of urban services. Two cases were selected, i.e., Australia and Hong Kong. Following ethics approval (#2000000257) granted by Queensland University of Technology's Human Research Ethics Committee, an online survey was developed to collect data from the public in Australia (Sydney, Melbourne, Brisbane) and Hong Kong. The questionnaire was developed using the key AI and public perception literature. The survey focused on capturing participants' responses to explain people's behaviours and preferences regarding the urban services most suited for the future application of AI technology and the key challenges for local governments to adopt AI for local service delivery.
An online enterprise survey platform, i.e., Key Survey, was utilized to conduct the survey. The minimum number of participants (384 at a confidence level of 95% and a margin of error of 5%) was determined based on methods suggested in the literature [56]. Only adults (people over 18) were invited to participate. The survey was open between November 2020 and March 2021. A professional survey panel company and social media channels were used to recruit participants. In total, 851 valid responses were received (about a 23% response rate). The socioeconomic characteristics of the sample are given in Table 2, which shows descriptive statistics for the geographic variables in this study. Some of the independent categorical variables have a small sample size; for example, of the 851 participants, only five selected gender "other" and only 24 are over 85. The survey includes respondents from Australia (n=604) and Hong Kong (n=247).
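The stated minimum of 384 participants is consistent with the standard Cochran sample-size formula for estimating a proportion at a 95% confidence level (z ≈ 1.96) with a 5% margin of error and the conservative p = 0.5; a minimal sketch:

```python
def min_sample_size(z=1.96, margin_of_error=0.05, p=0.5):
    """Cochran's formula for the minimum sample size when estimating a
    proportion; p = 0.5 is the most conservative (maximum-variance) choice."""
    return (z ** 2) * p * (1 - p) / margin_of_error ** 2

n = min_sample_size()
print(round(n, 2))  # 384.16, commonly tabled as the minimum of 384
```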
Participants were asked to select all that apply when answering two questions related to AI services. These questions were: "Which of the following urban services are most suited for the future application of AI technology?" and "Which of the following are the key challenges for local governments to adopt AI for local service delivery?". The answers to these questions were converted to 0 or 1. As mentioned previously, our objective is to gain knowledge about the participants' behaviour and insight into public perceptions concerning the use of AI in local government and urban services. The variables used in our survey are listed in Table 3.
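The conversion of "select all that apply" answers into 0/1 variables can be sketched as follows (the option names here are illustrative, not the survey's actual coding):

```python
# Illustrative option list; the full survey items are discussed in Section 4.
OPTIONS = ["aged_care_and_disability", "crime_and_security",
           "transport", "waste_management"]

def to_indicators(selected):
    """Turn one participant's multi-select answer into 0/1 dummy variables."""
    chosen = set(selected)
    return {option: int(option in chosen) for option in OPTIONS}

row = to_indicators(["aged_care_and_disability", "transport"])
# row: {'aged_care_and_disability': 1, 'crime_and_security': 0,
#       'transport': 1, 'waste_management': 0}
```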

3-2-Research Method
Participants were given various alternatives from which to choose, and they could answer questions by selecting 'all that apply'. Data and answers were analysed using descriptive statistics. One way to analyse discrete variables is by applying a binary logistic regression model to the dependent variables after converting them to dummy variables that take the values 0 or 1. The logistic regression model is:

ln(p / (1 − p)) = β₀ + β₁x₁ + ⋯ + βₖxₖ

where p / (1 − p) represents the odds, {β₀, …, βₖ} are the model parameters, and {x₁, …, xₖ} denote the independent variables. We apply a stepwise regression approach to find the best candidate logistic regression model.
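As an illustration of the model, the following sketch fits a binary logistic regression to toy dummy-coded data by plain gradient descent (in practice a statistics package would be used; the data and variable names here are invented):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=1.0, epochs=5000):
    """Estimate beta in ln(p/(1-p)) = b0 + b1*x1 + ... + bk*xk by
    batch gradient descent on the negative log-likelihood."""
    k = len(X[0])
    beta = [0.0] * (k + 1)                 # beta[0] is the intercept
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(beta[0] + sum(b * x for b, x in zip(beta[1:], xi)))
            grad[0] += p - yi
            for j, x in enumerate(xi):
                grad[j + 1] += (p - yi) * x
        beta = [b - lr * g / len(X) for b, g in zip(beta, grad)]
    return beta

# Toy data: x1 = 1 if the respondent is over 55; y = 1 if they support the
# AI application. 30/40 of the over-55s and 24/60 of the rest answer yes.
X = [[1]] * 40 + [[0]] * 60
y = [1] * 30 + [0] * 10 + [1] * 24 + [0] * 36
b0, b1 = fit_logistic(X, y)
# b0 approaches ln(24/36) ≈ -0.41 and b1 approaches ln(3) - ln(24/36) ≈ 1.50,
# so exp(b1) ≈ 4.5 is the odds ratio associated with being over 55.
```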
The overall process of the research methodology is illustrated as a research flowchart in Figure 1.

4-Analysis and Results
The study examines critical areas of public understanding, optimism, and concern regarding the societal application of AI technologies, based on a representative public opinion survey of Australians and Hongkongers. To explore respondents' views regarding the applications and challenges of AI for the 'social good', participants were asked to answer two questions. First, participants were asked, "Which of the following urban services are most suited for the future application of AI technology?". Their responses were used to help us better understand public opinions on AI's potential applications for social good. During the survey, participants were instructed to choose all applicable options from a list of urban services (see the application-area variables in Table 3). The second survey question was: "Which of the following are the key challenges for local governments to adopt AI for local service delivery?". The purpose of this question is to explore respondents' opinions about potential challenges in adopting AI technology in public services.
The following is a list of possible responses to this question:
- Limited local government financial resources and investment capabilities for AI projects;
- Limited project coordination for the AI implementation between other neighbouring local governments and the state government;
- Limited technical local government staff and know-how on AI projects;
- Limited interest in AI-based services from the local community;
- Limited trust of the local community to the AI technology;
- Lack of transparency and community engagement of the AI-based decisions;
- Heavy dependency on the AI technology companies/consultant for project/service delivery;
- Ethical concerns on AI of the local community;
- Lack of regulations on the AI utilization in the local government context;
- Lack of clarity on if/how AI will be used for the common/social good of all community members;
- Lack of clarity on how digital divide and technology disruption on the disadvantaged communities will be addressed;
- Limited human oversight over AI decisions concerning the local community.
The following sections discuss the major trends identified from public perceptions in Australia and Hong Kong regarding the application of AI in urban services and the associated challenges among the five societal categories (SCs). Each SC encapsulates a grouping of application and challenge areas to reflect the broad spectrum of urban services (see Section 2). The findings of the binary logistic regression models are reported in this section. The results are presented in tables, which only include model coefficients that are statistically significant (p-value < 0.05). Table 3 serves as a reference table detailing the variables and their definitions. The coefficient estimate, standard error, odds ratio, and probability are reported, as well as the corresponding p-value.
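The reported probabilities are consistent with applying the logistic transform to a coefficient on its own (for example, the coefficient of 1.66 reported for AGE5 in the Australian model corresponds to 84%); a small sketch, assuming that convention:

```python
import math

def odds_ratio(coef):
    """exp(b): the multiplicative change in the odds per one-unit change."""
    return math.exp(coef)

def probability(coef):
    """Logistic transform 1 / (1 + exp(-b)) of a single coefficient."""
    return 1.0 / (1.0 + math.exp(-coef))

print(round(odds_ratio(1.66), 2))   # 5.26
print(round(probability(1.66), 2))  # 0.84, i.e., the 84% reported for AGE5
```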

4-1-Applications and Challenges in Adopting AI in Control Systems and Security (SC1)
Section 4.1.1 presents the results for the "Aged-care and disability" application area. These results are given as an example and separated into their own section to provide a more extensive explanation that assists the reader in comprehending the analysis. Section 4.1.2 provides a general discussion of the participants' responses for all other application areas that belong to SC1, which comprises: Animal rescue and control (Table A1); Crime and security (Table A4); Disaster/emergency prediction and management (Table A6); Pandemic monitoring and control (Table A15); and Urban development control and monitoring (Table A17). The corresponding statistical models are presented in Appendix A.
Aside from the various conceivable applications of AI technology in public services, challenges of AI adoption are also foreseen. As a result, the following sections analyse the survey results that disclose interesting tendencies and briefly discuss public concerns about the potential drawbacks of AI. Globally, there is still a lack of knowledge on how to harness the potential of AI while assuring sustainability, justice, management of information asymmetry, and failure risk in these environments. The opinion survey conducted among Australian and Hong Kong citizens indicates differences in how they perceive and identify the significant challenges of implementing AI technology. Section 4.1.3 and its subsections discuss in detail the responses regarding "Limited local government financial resources and investment capabilities for AI projects" as an example, while Section 4.1.4 offers a general discussion of the participants' responses to the other SC1 challenges. SC1 comprises two challenge areas: Lack of regulations on AI utilization in the local government context (Table B8); and Limited human oversight over AI decisions concerning the local community (Table B11). The corresponding statistical models for these challenge areas are presented in Appendix B.

Table 4 summarizes the findings from Australia and Hong Kong on the use of AI in aged-care and disability. Appendix A includes the results for the other potential application areas of AI for social good. Table 4 reveals that Australians and Hongkongers have very different views regarding the application of AI for aged-care and disability. This result is expected and intuitive considering the sociodemographic and economic differences between the two countries. The only thing in common is their negative view regarding the application of AI to improve the functionality of essential services (e.g., healthcare, education). Further information on both country contexts is provided below.

The Australian Context
For the age ranges 55-65 (AGE5) and 75-84 (AGE7), the coefficients of 1.66 and 1.69, respectively, show that these respondents favour using AI for aged-care and disability. According to the results, there is a high probability (84%) that people in these age categories believe that AI can be employed in aged-care and disability. Many older adults value independence and choose to live in their own homes with proper support rather than entering institutional care. Remote monitoring technology, such as video cameras that monitor people's actions at home, can help seniors live independently. Therefore, people in these age groups, who are most likely to benefit from the potential application of AI in elderly care and disability, are more likely to consider how it can help them and their relatives.
The use of AI in aged-care and disability is supported by participants who generally identify AI with machine learning (coefficient 0.63). With a 63% probability, people with this attribute are likely to believe that AI can be used in aged-care and disability. This outcome might be explained by the fact that individuals with prior exposure to machine learning are more likely to have had favourable experiences with AI. Another trend arising from our survey is that unemployed participants (EMP2) favour using AI for aged-care and disability, with a coefficient of 1.07. There is a high probability (74%) that unemployed people believe that AI can be applied to aged-care and disability. Unemployed people may be more inclined to accept AI since they are less likely to be aware of the obstacles connected with AI implementation. Ironically, those who do not comprehend the fundamental ideas of AI (UND4) demonstrate the highest support for its application to aged-care and disability (1.95 and 88%, respectively). The increased acceptance of this group, despite having no prior knowledge, may be explained by their likening AI to other technologies they are familiar with but do not comprehend.
The results showed that those who learnt about AI through university/courses (SOURCE10) and social media/the internet (SOURCE8) are opposed to adopting AI for elderly and disabled care (coefficients -1.44 and -1.54, respectively). People with this background have the lowest probabilities (19% and 18%, respectively) of believing AI can be used in aged-care and disability. People in these categories may have a lower level of acceptance since they are aware of the considerable challenges connected with this application and of other areas where AI is likely to be adopted first. Similarly, participants who see a promising use of AI in improving the functionality of essential services (e.g., healthcare, education) (PROMI6) or who feel generally neutral about AI (FEEL2) disfavour the use of AI for aged-care and disability, with coefficients of -1.26 and -0.78, respectively. Participants from these categories may have a low level of adoption because they see more potential in deploying AI technology in other areas, such as telemedicine. Respondents from these groups show probabilities of believing that AI can be applied to aged-care and disability of 22% and 31%, respectively.

The Hong Kong Context
The coefficient of -1.47 for GEN2 indicates that female participants disfavour the use of AI for aged-care and disability, with only a 19% probability of believing in this application area. In contrast, the age group 25-34 (AGE2) and participants who associated AI with advanced predictive analytics (PRED) favour the application of AI for aged-care and disability, with coefficients of 2.75 and 2.81, respectively. This result is intuitive, as young people who appreciate AI's potential for advanced predictive analytics are expected to be optimistic about AI and its many applications. People from these groups believe that AI can be applied in aged-care and disability with a probability of 94%.
Participants who feel unsure (PROSPECT2), neutral (PROSPECT4), enthusiastic (PROSPECT5), or excited (PROSPECT6) about an AI future favour the use of AI in aged-care and disability. People from these groups have a high probability, 94-98%, of believing in the application of AI for aged-care and disability. Participants who see the most beneficial functions of AI technology as: (a) learning, evolving and improving decision-making over time (BENEFIT3); and (b) monitoring the environment, sensing changes and adjusting decision-making accordingly (BENEFIT4) support its use for aged-care and disability (coefficients of 2.11 and 2.48, respectively). A high percentage of people from these groups, BENEFIT3 (89%) and BENEFIT4 (92%), believe that AI can be applied in aged-care and disability.
On the other hand, participants who first identified AI with humanoid robots (ROB) were less likely (coefficient -1.49) to support the use of AI in aged-care and disability. There is a low probability (18%) that people from this group believe in this application area for AI. People who hold this view may assume that robots would be used to take care of people's needs, rather than considering the many other relevant ways AI can be used in this application. The results also showed that participants who anticipate a noticeable impact of AI on daily life in the next 5 to 10 years (IMPACT3) disfavour (coefficient -1.65) the use of AI for aged-care and disability. People who hold this viewpoint show a low probability (16%) of believing that AI can be used in aged-care and disability. Although people with this view expect AI to have an impact in the near future, they do not see potential in this particular application.
The coefficient of -2.64 for SOCIETY2 indicates that participants who see no impact of AI on society also disfavour the use of AI for aged-care and disability. Only 7% of those respondents believe AI can help individuals who are elderly or disabled. In contrast, people who see AI as something negative for society (SOCIETY3) agree (coefficient 3.86) that it can be applied to aged-care and disability. Even though their overall view of AI is negative, they still see some potential benefits: individuals who share this viewpoint have a high probability (98%) of believing that AI could be used for aged-care and disability. Another tendency appears to be a relationship between having no prior experience with AI (EXP3) and support (coefficient 3.16) for the use of AI for aged-care and disability. The results indicate a high probability (96%) that these individuals feel AI can be used for aged-care and disability. Even though they have no experience with AI, they may be aware of its potential. Similarly, they could be making analogies with other technologies they do not understand but use regularly, such as the internet and cell phones.

4-1-2-Trends Regarding Other Application Areas of Adopting AI in SC1
Many advanced techniques, including security systems and devices, employ AI algorithms to improve their capabilities. This section explored respondents' perspectives on prospective areas where AI technology might improve control system security (SC1). As shown in Tables A1, A4, A6, A15 and A17, in Australia, older generations demonstrated far higher levels of trust in using AI technology for service/security control, implying that they are comfortable utilizing current workforce technologies. In general, younger respondents (under 25) appear to be more cautious about implementing this technology than more senior respondents, which might be because the younger population perceives new technology as more threatening than older respondents do. In Hong Kong, however, unlike in Australia, age groupings had little impact on participants' decisions to use AI in control and security systems. Hong Kong shows mixed views, from unsure to enthusiastic, which may reflect limited knowledge of the topic. In terms of demographic characteristics, there is no discernible trend; that is, no apparent trends emerged in terms of respondents' age, gender, socioeconomic level, and so on.

Table 5 shows the regression analysis results for "limited local government financial resources and investment capabilities for AI projects" as a critical challenge for the adoption of AI by local governments in Australia and Hong Kong. The first part of Table 5 presents results for Australia, while the second presents those for Hong Kong. The first paragraphs below discuss the findings on what challenges the government may face when implementing AI in public goods based on replies from Australian respondents, followed by an analysis of responses from Hong Kong respondents. In sum, our study revealed that Australians' and Hongkongers' perceptions of AI are shaped in opposite ways by their feelings and level of excitement about AI.
In general, Australians appear more favourable than Hongkongers about their government's capacity to deploy AI, which might be due to Australia's lower financial demands and the accompanying stress. Further information on both country contexts is provided below.
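The binary logistic regression modelling behind Table 5 can be illustrated with a small from-scratch sketch. The data, variable, and coefficient values below are hypothetical stand-ins, not the study's actual survey responses; the sketch only shows how a coefficient for a binary group indicator is estimated:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(xs, ys, lr=1.0, epochs=500):
    """Fit y ~ intercept + b1*x by gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            resid = y - sigmoid(b0 + b1 * x)  # score contribution of one respondent
            g0 += resid
            g1 += resid * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical example: a binary predictor (e.g. membership in one survey
# group) with a known true effect, recovered from simulated responses.
random.seed(42)
true_b0, true_b1 = -1.0, 2.0
xs = [random.randint(0, 1) for _ in range(4000)]
ys = [1 if random.random() < sigmoid(true_b0 + true_b1 * x) else 0 for x in xs]
b0_hat, b1_hat = fit_logit(xs, ys)
print(round(b0_hat, 2), round(b1_hat, 2))  # estimates close to -1.0 and 2.0
```

With a single binary predictor of this kind, the sign and size of the fitted coefficient determine whether a group is reported as likely or unlikely to perceive a given challenge.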

The Australian Context
Participants who may not feel comfortable living or working in a fully autonomous place (COMFORT2 and COMFORT3) think that financial resources are likely to be a key challenge (coefficients 1.08 and 0.92, respectively) for the adoption of AI by local governments. There is a high probability (72-75%) that people who hold these beliefs feel that financial resources can hamper AI adoption; this outcome is predictable and logical because they are already uncomfortable with automation, which is strongly connected with AI. In contrast, participants who have rarely or never interacted with AI applications (FREQ4 and FREQ5) do not think (coefficients -1.25 and -1.79, respectively) that financial resources can be a challenge for the adoption of AI by local governments. There is a low probability (14-24%) that people who have rarely or never interacted with AI believe that financial resources could be a challenge for local governments adopting AI. This group's unfamiliarity with AI makes it difficult for them to understand the potential difficulties of adopting complicated technology. In contrast, those who have experience with, for example, AI-based smart speakers are aware of how limited their capabilities currently are and how expensive and complex the development of significantly superior technology is likely to be.
The coefficient for feeling neutral about AI in general (FEEL2) is -1.02, suggesting that individuals with this profile are unlikely to consider "limited local government financial resources and investment capacities for AI projects" as a significant barrier to local governments embracing AI. There is a low probability (27%) that people who feel neutral about AI believe that financial resources can negatively affect its adoption. Individuals who share these beliefs are unlikely to have strong feelings towards AI, possibly due to a lack of exposure or awareness. Similarly, people who feel enthusiastic and optimistic about a future with AI (PROSPECT5 and PROSPECT7) are unlikely (coefficients -1.03 and -1.29, respectively) to see financial resources as a significant challenge for the adoption of AI by local governments. The probability that respondents with these characteristics believe that a lack of financial resources would hinder the adoption of AI by local governments is low (26-27%), which is understandable given that they are already excited and optimistic about the technology.

The Hong Kong Context
The following are Hong Kong residents' perceptions of the most significant challenges the government will encounter in implementing AI technology in public services. The coefficient for the unemployed group (EMP2) is 4.21, indicating that participants belonging to this group are likely to see "limited local government financial resources and investment capabilities for AI projects" as a key challenge for local governments. There is a high probability (99%) that unemployed people believe that financial resources can challenge the adoption of AI by local governments. This result is expected and intuitive, given that unemployed people, especially those living in expensive places like Hong Kong, are likely to be aware of the importance of financial resources for developing and deploying the technology. In contrast, participants with a high income (INC6) do not see (coefficient -2.88) financial resources as a challenge for local governments deploying AI. There is a low probability (5%) that people with high income believe that financial resources can be a significant challenge for local governments to deploy AI. These results are consistent with the previous finding and intuitive, as people with high incomes are less likely to see the financial limitations faced by others, including local governments.
Participants who learnt about AI through social media or the internet (SOURCE6) view financial resources as a significant challenge for the deployment of AI by local governments. There is a high probability (98%) that people with this background believe that financial resources are a significant challenge for the implementation of AI. People who learned about AI through these channels are likely to understand the expense and associated challenges of adopting AI technologies for the public good; hence, they are likely to see financial resources as a significant issue for local governments. Another trend links considering financial resources a key challenge for implementing AI by local governments with participants who believe that the most promising future applications of AI are to "reduce costs and increase resources" (PROMI2), "reduce error and mistakes" (PROMI3), and/or "increase free time for humans to complete other tasks" (PROMI4), who see "a risk that someone will abuse AI" (DISAD4), and/or who feel neutral or negative about AI in general (FEEL2 and FEEL3) (coefficients 5.92, 6.27, 3.3, 3.14, 2.31, and 8.23, respectively). There is a high probability (91-100%) that people with these views find limited financial resources a major barrier to AI deployment.
Respondents who link AI with humanoid robots (ROB) do not regard financial resources as a major barrier for local governments deploying AI (coefficient -2.64); only a small percentage (7%) of people with this view believe that limited financial resources can challenge the implementation of AI. In contrast, participants who first associate AI with a dystopian future controlled by computers (DYST), or who consider that AI's abilities are "solving problems using data and programmed reasoning", "learning from previous mistakes to inform future decision-making", and "analyzing its environment and making decisions based on this analysis", believe that financial resources are likely to challenge (coefficients 3.64, 9.02, 7.98 and 7.83, respectively) the deployment of AI by local governments. There is a high probability (97-100%) that people with these views consider limited financial resources a key challenge for the implementation of AI. That is, people who attribute significant abilities to AI are likely to be aware of the corresponding cost of its development and implementation.
As shown in Table 5, participants who view the most beneficial role of AI as "to monitor the environment, sense changes and adjust decision-making accordingly" (BENEFIT4) do not see (coefficient -4.45) financial resources as a key challenge for the implementation of AI by local governments. There is a low probability (1%) that people with this view believe that the deployment of AI can be affected by limited financial resources. It could be that people with this view consider the benefits of AI so essential that financial resources are not a consideration. The outcomes for participants who feel neutral about an AI future (PROSPECT4) show that individuals with this profile are unlikely (coefficient -3.61) to consider financial resources a significant challenge for AI implementation. In contrast, those who feel enthusiastic (PROSPECT5) about an AI future are highly likely (probability 93%) to see financial resources as a significant challenge for the deployment of AI by local governments.
People who may feel comfortable with a fully autonomous place to live or work (COMFORT2) or who have never interacted with AI (FREQ5) are likely to consider financial resources a key challenge (coefficients 2.75 and 8.04, respectively) for the implementation of AI by local governments. Our survey results show a high probability (94-100%) that these participants place a high value on financial resources for local government AI adoption. In contrast, the results showed that financial resources are not considered a significant challenge for implementing AI by local governments (coefficient -5.81) by those who think that society will not change because of AI (SOCIETY2).
As expected, Australians and Hongkongers show significantly different response behaviour regarding the role of limited local government financial resources and investment capabilities for AI projects. The only common factors were "neutral feelings toward AI in general" (FEEL2), an "enthusiastic perspective towards the future of AI" (PROSPECT5), "limited interaction with previous applications of AI" (FREQ5), and the "level of comfort with a fully autonomous place to live or work" (COMFORT2). COMFORT2 had an equivalent impact on Australian and Hong Kong residents' perceptions of the importance of financial resources in the deployment of AI by local governments: participants from both countries who may feel comfortable with a fully automated place to work or live tend to see limited financial resources as a challenge for the deployment of AI. FREQ5, in contrast, worked in opposite directions: Australians who have never interacted with AI are unlikely to see limited financial resources as a major challenge, whereas their Hong Kong counterparts are highly likely to do so. The limited experience of these groups with AI technology, and potential biases regarding what is involved behind automation at home and work, may shape how much weight they give to financial resources.

4-1-4-Trends Regarding Other Challenging Areas of Adopting AI in SC1
According to our survey, Australian public opinion about the challenges of public sector adoption of AI in control systems and security is patterned by gender: 61% of males associate the possible main challenges with a lack of regulations on AI usage in the local government context, and 67% believe that limited human oversight over AI decisions affecting the local community will be the main challenge facing AI adoption (Tables B8 and B11). In contrast, participants who declared no understanding of the basic concepts of AI (UND4) do not feel that the primary problems in implementing AI are related to control and security. In Hong Kong, there were no noticeable trends in terms of respondents' age, gender, or socioeconomic status.

4-2-Applications and Challenges of Adopting AI in Digital Divides, Social Structuring and Community (SC2)
SC2 has three application areas: Community support and engagement (Table A5); Housing and homelessness (Table A13), and; Urban planning and development (Table A18). The analysis of the application areas of this social category focuses on understanding respondents' beliefs and attitudes towards highly complex technological development processes and on assessing social coherence and equity in Australia and Hong Kong. In Australia, aside from age once again having an essential impact on participants' decisions to use AI (with a greater chance of supporting the use of AI with rising age), another notable tendency concerns where participants learned about AI. In general, participants reveal that they are opposed to employing AI for community support and engagement and for housing and homelessness, regardless of the source from which they learned about AI (see Tables A5 and A13).
According to the Hong Kong analysis, participants with a promising vision for the future use of AI (PROMI4, PROMI7, PROMI10, and PROMI11) show a high probability (100%) of believing that one of the urban services most suited to AI technology is community support and engagement. In contrast, participants who think that AI's abilities are related to the ability to replicate and respond to human speech (ABS2) or the ability to solve problems using data and programmed reasoning (ABS3) showed the lowest probabilities (0%) of believing in the application of AI technology in this area (see Table A5).
When looking at the survey results for housing and homelessness in Hong Kong, the coefficient was negative for most of the socioeconomic variables used to characterise the sample, suggesting that participants tend not to believe in the future application of AI technology in this area (see Table A13). Advances in machine learning and AI techniques have enabled the application of learning algorithms from entertainment, commerce, and healthcare to social problems, such as algorithms that inform policies guiding the delivery of homeless services. However, although many Hong Kong residents believe in the use of AI in community engagement, they do not appear to consider that this technology will be applied to housing and homelessness, which may imply that the application of AI in this area is unclear to the participants. It is also interesting to note that medium-high income Hong Kong residents are sceptical about the use of AI in housing and homelessness: this group's probability of believing that AI technology would be used in this area is merely 1%.
As AI systems progress, they will be able to make judgments without the need for human input. However, one possible concern is that, while executing the tasks they were intended to accomplish, AI systems may unwittingly make judgments inconsistent with their human users' values, such as physically injuring humans. In this survey, four challenging areas were considered in SC2: Limited project coordination for AI implementation between neighbouring localities (Table B1); Limited interest in AI-based services from the local community (Table B3); Limited trust of the local community in the AI technology (Table B4), and; Ethical concerns on AI of the local community (Table B7).
The biggest problems connected with using AI in public services, according to Australians aged 55 to over 85, appear to be tied to the local community's limited faith in AI technology and ethical concerns about AI (see Tables B4 and B7). In Hong Kong, there are no clear trends in respondents' socioeconomic status for the areas belonging to SC2. However, the binary logistic regression model shows high proportions of 'strong belief' that the local community's lack of trust in the technology will be a potential challenge associated with AI adoption in public services. The highest percentages relate to participants' employment status (93%), source of AI learning (92% for SOURCE6 and 90% for SOURCE8), and those who think of autonomous automobiles (82%) and a dystopian future (90%) when considering what AI implies (see Table B3). Furthermore, 97% of participants in Hong Kong with a higher level of education (postgraduate degree) consider that one of the central concerns in AI adoption is connected to ethical issues (see Table B7).

4-3-Applications and Challenges of Adopting AI in Economy and Business (SC3)
The capacity of AI to deal with massive amounts of incoming data is its primary advantage over humans. For example, to forecast future stock values, one may utilize data from a company's activities, reviews, news, Twitter mentions, and a variety of other sources. Four application areas connected to SC3 were explored in this study: Business development and assistance (Table A3); Economic development (Table A7); Infrastructure management (Table A11), and; Transport management (Table A16). Notably, when we examined the results for these areas (Tables A7, A11, and A16), we observed that age once again influenced participants' decisions on using AI technology in urban services in Australia. The older generation's faith in the potential of AI to contribute to economic activity sectors may be due to their far higher confidence in themselves as employees; most of them may not feel a robot could do their jobs better than them. This might explain why more senior respondents appear to be more convinced of the technology's deployment than younger respondents. In Hong Kong, education level appears to influence participants' beliefs regarding the application of AI in business support and development, economic development, and transportation management. For example, participants who have completed Year 11 or equivalent (EDU2) believe that AI is unlikely to be used in the economy and industry, as demonstrated in Tables A3, A7 and A16. On the other hand, people in Hong Kong who know how AI may boost consumer involvement and help automate the most time-consuming tasks seem more likely to trust the application of AI in public goods (economic development and infrastructure management). This disparity can be explained by the fact that people with low levels of education may be unaware of the potential contribution that AI can make to the economy.
In contrast, people with a higher understanding of AI technology are more likely to favour applying AI in economic and business areas.
Although the use of AI for economic purposes might be advantageous, the deployment of AI in business can face several challenges. Therefore, we considered the following challenging area for SC3: Heavy dependency on AI technology companies/consultants for project/service delivery (Table B6). Respondents in both regions who are excited about AI adoption in public services do not feel that the high dependency created by relying on AI technology companies for project/service delivery would be one of the concerns connected with employing this technology for economic objectives.
As shown in Appendix B, Table B6, in Australia only 25% of respondents with an enthusiastic outlook on AI indicated a belief that companies will depend heavily on AI technology, and only 11% expressed similar beliefs in Hong Kong. Presumably, because those participants are excited about using AI technology in public services, they have a more optimistic perspective of how AI may assist the economy than of the drawbacks it brings, such as the risk of companies becoming overly dependent on the technology. Another pattern is that 96% of Hong Kong residents who believe AI will lead to a dystopian world also believe corporations will depend heavily on AI technology for project/service delivery (Table B6). Artificial intelligence already has a significant impact on human economies and societies, and it will have an even more substantial impact in the future. As a result, it is expected that those who have a dystopian perspective on the future usage of AI will believe that there will be heavy reliance on this technology.

4-4-Applications and Challenges of Adopting AI in the Information Society and Know-how (SC4)
The following three application areas make up SC4: Arts and culture (Table A2); Education (Table A8), and; Information and assistance (Table A10). In Australia, participants with postgraduate degrees believe AI will be used in arts and culture, with an 82% probability of adoption. Participants in Hong Kong who reported utilizing AI technology on a regular basis (FREQ2), on the other hand, expressed lower levels of support for the use of AI to generate culture for popular consumption (28%). Because technologies such as virtual reality and 3D printing are currently in use in both regions, the results may represent a difference between Australia and Hong Kong in perceptions of what AI entails. For example, in the film business, AI has assisted animators in mapping facial features and motions to their characters. Aside from that, anyone with a computer may utilize software capable of generating films, altering images, or drawing graphics.
Another notable tendency that emerged regarding the use of AI in education is that those with no degree and those with a higher degree are more inclined to reject its use in this sector in Australia (Table A8). Surprisingly, respondents in Australia (81%) and Hong Kong (99%) with a medium-high income believe that AI should not be employed in education. Aside from that, no significant tendencies emerged. This outcome is most likely due to the misconception that the purpose of using AI in education is to replace teachers rather than to assist them in recognizing each student's potential and limits. This is especially evident in Hong Kong, where participants who see a dystopian future in which robots "take on jobs" and/or "take over the globe" are convinced that AI technology will be employed in education (probability of 95%).
AI is increasingly being utilized to make choices in the absence of people. Although AI can produce less biased sentencing and parole judgments than humans, algorithms trained on biased data may discriminate against specific groups. Furthermore, the AI employed in this application may lack transparency, such that human users do not comprehend what the algorithm is doing or why it makes conclusions in specific instances. In that regard, the following two challenging areas make up SC4: Limited technical local government staff and know-how on AI projects (Table B2), and; Lack of transparency and community engagement of the AI-based decisions (Table B5).
Participants from Australia (65%) and Hong Kong (91%) who believe that one of AI's fundamental abilities is "to solve problems using data and programmed reasoning" believe that the main challenges for society when implementing AI in public services are a lack of transparency and community engagement in AI-based judgments (Table B5). Concerns about fairness and transparency in applying AI in control systems and security may indicate that human users may not comprehend what an algorithm is doing. In other words, they do not understand the outcome of an AI model, which makes sense since it is often challenging to explain results from large, complex neural-network-based systems. Besides that, curiously, respondents who reported often using technology such as chatbots and Google Maps have varied beliefs depending on where they reside. People in Australia (78%) believe that a lack of transparency and community participation in AI-based judgments will be a major difficulty when implementing AI, but most Hong Kong residents (89%) do not believe this would be a major challenge (Table B5). Although regularly consuming distinct forms of AI technology, citizens of Australia and Hong Kong appear to have diverse concerns about AI implementation due to differing perceptions of what AI is.

4-5-Applications and Challenges of Adopting AI in Sustainability, Wellbeing, and Health (SC5)
By detecting reductions in energy emissions, supporting CO2 removal, assisting in the development of greener transportation networks, monitoring deforestation, and anticipating extreme weather events, AI can enhance global efforts to safeguard the environment and conserve natural resources. Furthermore, AI can help healthcare workers better understand the day-to-day habits and requirements of the individuals they care for, allowing them to provide further feedback, advice, and support for remaining healthy. In this study, we grouped four application areas in SC5: Environmental conservation and heritage protection (Table A9); Healthcare (Table A12); Parks and recreation (Table A14), and; Water management (Table A19). In both Australia and Hong Kong, people with medium-high income (INC5) do not favour adopting and using AI in environmental conservation, heritage protection, and water management (Tables A9 and A19, respectively). Furthermore, we observed in Australia medium to high levels of support for the use of AI to address environmental challenges, healthcare, and water management among Australians who link machine learning with AI (see Tables A9, A12, A19). These findings may also hint at a distinction between individuals who truly comprehend how AI can be utilized and others who have only a vague concept of what AI is and how it may be used.
The two areas of challenge in SC5 are a lack of clarification on "whether/how AI will be utilized for the common/social benefit of all community members" (Table B9) and a lack of clarity on how the digital gap and technological disruption affecting disadvantaged communities will be addressed (Table B10). In Australia and Hong Kong, 64% and 77% of respondents, respectively, who associate AI technology with machine learning believe that one of the most significant challenges associated with adopting this technology is a lack of clarity on whether AI will be used for the common/social good of all community members. These findings may indicate a lack of trust among this group's members in governments' adoption of this technology in this area.
In Hong Kong, participants who believe that the most significant disadvantage of AI technology is its high cost (DISAD1) do not think that the challenge the government will face will be associated with a lack of clarity on if/how AI will be used for the common/social good of all community members (coefficient -2.13) (see Table B9). On the other hand, those participants from Hong Kong who believe that AI contributes by increasing free time for humans to complete different tasks (PROMI4) think that the biggest challenge in adopting AI technology will be associated with a "lack of clarity on if/how AI will be used for the common/social good of all community members" (probability of 94%) (Table B9).
Furthermore, Hong Kong residents who believe AI should be used to assist citizens (PROMI11) believe that the main challenge the government will face when adopting AI for social good is a "lack of clarity on how the digital gap and technological disruption on disadvantaged communities will be addressed" (see Table B10). Because the technology industry has traditionally been hesitant to promote workplace equality, it is not unexpected that this group considers this area a significant challenge when applying AI to social goods. Respondents in Australia who believe that society will become worse because of the use of AI (SOCIETY3) see the lack of clarity on how the digital divide and technological disruption will be managed in disadvantaged communities as a challenge for the government in adopting AI for the public good (probability of 70%) (see Table B10). It appears that those who believe that society will undergo unfavourable changes are more likely to think that when the government applies these new technologies, there will be a lack of equity and inclusion.

5-Findings and Discussion
Partly due to rapid AI development, there have been more AI applications for urban services in recent years [57][58][59][60]. While successful AI applications are linked with people's perceptions, little is known regarding people's perceptions of integrating AI into urban services. To address this issue, this study examines people's behaviours and preferences regarding the most suitable urban services for the future application of AI technology and the key challenges for governments to adopt AI for urban services. The study considers the challenges and obstacles in AI-based services from a user's point of view. An empirical investigation of the public perceptions of Australians and Hongkongers was conducted. The key findings of the study are as follows:  Attitudes toward AI applications and their ease of use have significant effects on forming an opinion on AI. For example, two-thirds of Australian participants and most participants from Hong Kong who consider AI's fundamental ability to be solving problems using data and programmed reasoning believe the obstacles to implementing AI for public services are mainly due to a lack of transparency and community engagement in AI-based judgments.
 Initial thoughts regarding AI's purpose seem to significantly affect the perception of application areas and the adoption challenges of AI. About 96% of participants without prior experience with AI, that is, those who may not know AI clearly, believe that AI can be applied to aged-care and disability. Australians who lack an understanding of AI's fundamental concepts are more supportive of AI-based aged-care and disability applications. In contrast, those who know more about AI are not as optimistic.
 Perception differences between Australians and Hongkongers in AI application areas are significant. Australians are more optimistic about AI applications. A quarter of research participants from Australia with an enthusiastic outlook on AI agree that companies will heavily depend on AI, while only 11% express similar beliefs in Hong Kong. Most Australians with postgraduate degrees trust that AI will be used in arts and culture, but many Hong Kong people are not as optimistic.
 Perception differences between Australians and Hongkongers in government AI adoption challenges are insignificant. Compared to Hongkongers, Australians are more optimistic regarding their government's ability to deploy AI. 78% of Australians believe that a lack of transparency and community participation in AI-based judgments will be a major hurdle in implementing AI, but most Hong Kong residents do not view it as a major challenge.
Below, we elaborate on the factors that affect participants' perceptions of AI. This way, the study findings will inform local authorities that deploy AI in urban services and offer directions for future research.

5-1-Factors Behind Different Perceptions on AI
The digital divide has intensified with the use of AI applications during the recent decade, as some benefit more than others due to differing access to AI technology [61]. For example, younger generations are familiar with digital tools like mobile phones and online games, while many older digital migrants are not. Older adults who have retired may have fewer financial resources or lack the motivation to learn new technologies. Similar problems occur among those with disabilities. This study sheds light on the digital divide by examining the impact of individuals' knowledge, income, financial resources, and educational background on their perceptions of various AI applications, such as aged-care and disability, local government services, and arts and culture. Some factors that impact public perceptions are discussed below.

5-1-1-Impact of Lack of Transparency on Perceptions of AI for Decision-making (SC1)
About 65% of participants from Australia and 91% from Hong Kong consider that the obstacles to AI for public services stem primarily from a lack of transparency and community engagement. Humans may not understand what an algorithm does or how an AI model arrives at its outcome. Although many white-box models exist at present, many people, including those who teach AI in higher education, may still believe that all AI is a black box. Because AI has developed so fast, those who do not actively follow AI research may not have noticed that many AI models have already raised their level of transparency to that of a white box.
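As an illustration of the white-box idea, a binary logistic regression (the same model family used in this study's analysis) exposes its decision logic directly through its coefficients. The sketch below is a minimal, hypothetical example using scikit-learn; the feature names and data are invented for illustration only and do not come from this study's survey.

```python
# Minimal sketch of a "white-box" model: a binary logistic regression
# whose learned coefficients can be inspected and communicated directly.
# The features and data below are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical survey-style features: [age_group, prior_ai_exposure]
X = np.array([[1, 0], [2, 1], [3, 1], [1, 1], [3, 0], [2, 0]])
y = np.array([1, 0, 0, 0, 1, 1])  # 1 = positive perception of AI

model = LogisticRegression().fit(X, y)

# Transparency: each coefficient states how a feature shifts the log-odds
# of a positive perception, so the model's reasoning can be explained to
# non-experts, unlike a black-box deep network.
for name, coef in zip(["age_group", "prior_ai_exposure"], model.coef_[0]):
    print(f"{name}: log-odds weight = {coef:.3f}")
```

In such a model, the sign and magnitude of each coefficient can be read off and explained in plain language, which is precisely the kind of transparency that black-box models lack.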

5-1-2-Digital Divide and the Impact of Knowledge on Perceptions of AI for Aged-care and Disability (SC2 and SC5)
This study reveals that the more people know about AI, the less likely they are to believe AI can help with aged care and disability. 18% of Hongkongers who had previously encountered AI humanoid robots do not believe AI can be used for aged care and disability. In sharp contrast, 96% of participants without prior AI experience believe that AI can be applied to aged care and disability. In Australia, respondents who learnt about AI from universities, courses, social media, and the internet do not believe AI can be applied to aged care and disability, while 84% of people aged 55-65 and 75-84, and 74% of unemployed people, believe that it can.
Taking care of older and disabled people requires strong AI equipped with multiple human-like capabilities. Many people who do not know AI, or lack AI knowledge, may hold fanciful expectations of it. Some may have watched movies and TV series about AI humanoid robots and may think AI chatbots can communicate with us like humans, take care of the elderly like a maid, and water the plants in the garden. Nevertheless, most AI can mainly perform single, narrow (weak AI) tasks, such as predicting prices, classifying images, and sending reminders for the elderly to take pills. For example, the AI chatbots used in many shopping malls, elderly homes, and schools cannot understand most human questions, as people phrase them with many different collocations and words, and they cannot grasp humans' implied meanings. Technology with such capabilities is unlikely to appear soon.

5-1-3-Impact of Financial Resources on Perceptions of AI Adoption by Local Governments (SC3)
In Australia, 72-75% of participants who may not feel comfortable living or working in a fully autonomous place think that financial resources are likely to be a key challenge for local governments adopting AI. In Hong Kong, 99% of unemployed people believe that financial resources can challenge the adoption of AI by local governments. We speculate that unemployed people facing high living costs are more aware of the importance of financial resources in technology development and deployment, as they face financial problems in many different aspects of life. We also speculate that Australians enjoy better financial protection when unemployed, whereas financial pressures on unemployed people are higher in Hong Kong.
While Australians and Hongkongers who have never interacted with AI are unlikely to see limited financial resources as a significant challenge for AI deployment, Australians are more optimistic than Hongkongers regarding their governments' ability to deploy AI. Again, this could be because financial pressures and the associated stress are lower in Australia than in Hong Kong. Further qualitative research could investigate the reasons behind this difference.

5-1-4-Impact of Income on Perceptions of AI for Education and Local Context on Perceptions of AI for Arts and Culture (SC4)
While AI applications often require huge expenses, we might expect the application of AI in education to be less welcomed by low-income groups. It is quite surprising, then, that 79% of respondents from Australia and 99% from Hong Kong with medium-high incomes believe that AI should not be used in education. We speculate that the reasons may not be linked to expenses, as these groups are wealthy. Rather, many parents may have to spend extra time and money on AI-related extra classes for their children. Some parents may even learn AI themselves to help their children and safeguard their competitiveness. These technology migrants, however, often find such learning challenging even as adults. As a result, they may consider incorporating AI in education inappropriate or troublesome.
Participants with postgraduate degrees in Australia believe that AI will be used in arts and culture, with an 82% probability of adoption. Participants in Hong Kong, though, are not optimistic on this aspect. Compared to Australians, Hong Kong people are not keen on culture and arts activities. As Hong Kong is a city with some of the longest working hours, many people prefer to rest when they do not need to work. Those with more leisure time have many other activities to choose from, day to night, such as shopping (open seven days a week until late), engaging in various types of sports, and watching movies online. Arts and culture are often not a top priority for many people; that is why Hong Kong was also known as a 'cultural desert'. As many people do not have time to participate in arts- and culture-related activities, they are also likely to be unaware of relevant AI applications and unlikely to believe that AI will be used in arts and culture. Another reason is the relatively low business value of arts- and culture-related activities, which makes the development and application of AI in this domain unpopular in Hong Kong.

5-2-Practical Implications to Public, Planning and Policy
Both investigated country contexts have different approaches to AI deployment. In Hong Kong, the deployment of AI in urban services is deliberately connected to a strategy of the local government meant to diversify the local economy and compete internationally [62]. In Australia, by contrast, the deployment of AI in urban services is less focused on the local economy and more centred on service efficiency and quality [63]. Therefore, in both countries, urban policy is one of the strongest drivers of public perception formation. Additionally, education is often considered a meaningful way to change people's thinking. Nevertheless, when many medium-high income groups believe that AI should not be used in education, it is high time to study the underlying reasons. On the other hand, misunderstandings, such as the belief that all AI technologies make black-box decisions, imply that continuous education and research are essential given the fast development of AI technologies. Unlike subjects such as English literature, the rapid development of AI underlines the importance of continuous updates and lifelong education. But how can we properly educate, for example, urban policymakers, city managers, planners, and the public about AI, given that this technology is hard to understand, especially for those without a background in computer science or engineering? The recent rise of the explainable artificial intelligence (XAI) movement, along with the need for sound AI strategies, might help [64,65].
When governments provide most public services with AI, successful implementation and application require public support. Collecting public opinions on AI therefore becomes necessary. An overview of beliefs and attitudes toward AI can help ensure smooth AI implementation. This paper provides a general understanding of which types of people are more supportive of AI and of the challenges in AI implementation, which is helpful for urban service planning. Governments may raise fiscal expenditure on AI education to reduce public misunderstanding of AI capabilities, for example, the perception that AI decision-making lacks transparency. More grants and funding can be provided for AI research to develop more robust AI, enhance its ability to assist the aging population, provide innovative solutions for arts, culture, and education activities, and improve AI transparency in decision-making. Relevant education, continuous education funds, and fiscal policies can help achieve these goals.
Lastly, the study findings inform local authorities that deploy, or plan to adopt, AI in their urban services. Specifically, the insights generated in this study help local governments identify the urban planning and decision-making processes with the greatest potential to utilize AI services and applications, while taking public concerns into account. This sensitivity should also be preserved while developing and pilot-testing AI-related urban services and applications, as paying special attention to the equitable deployment of AI in urban services will generate opportunities for wider public acceptance and adoption.

5-3-Limitations of the Study
The study generated valuable insights into the public's perceptions of AI application areas and the adoption challenges of AI in urban services. However, the following limitations of the study should be noted. First, while the sample size is adequate for the survey, having more participants might have surfaced additional perspectives. Second, the study focused on two countries' contexts; while the statistical representation requirements were met, expanding the study to a larger number of countries might have provided extended insights. Third, there are some representation differences between the study participant characteristics and the actual resident characteristics of the case cities, which might have some impact on the results. Fourth, the study findings are only quantitatively assessed: answers to the open-ended questions are not factored into this paper, as these data will be analysed thematically and reported in another paper. However, the authors have read and checked all qualitative responses to make sure they do not contradict the findings reported here. Fifth, there might be unconscious bias in interpreting the study findings. Lastly, our prospective studies will consider tackling these issues.

6-Conclusion
AI is highly popular across the public sector due to the efficiencies its applications generate in the delivery of government services. Among the many existing and potential application areas in local government, AI adoption in the planning and delivery of urban services stands out. Despite its increasing importance and potential, AI adoption in urban services is still an understudied area, and in particular, the understanding of what users and the public think about AI utilization in these services is limited. To bridge this research and knowledge gap, the study reported in this paper explored people's behaviors and preferences regarding the urban services best suited to the application of AI technology and the challenges governments face in adopting AI for urban service delivery. The analysis of survey data collected from the public in Australia and Hong Kong revealed the following findings. First, attitudes toward AI applications and ease of use have significant effects on the public's forming an opinion on AI. Second, the public's initial thoughts regarding the meaning of AI have a significant impact on perceptions of AI application areas and their adoption challenges. Third, not surprisingly, public perception differences in AI application areas between the case country contexts of Australia and Hong Kong are significant, highlighting the geopolitical context-driven nature of technology adoption. Lastly, however, the perception differences between the public in Australia and Hong Kong regarding government AI adoption challenges are minimal, somewhat affirming the universality of the adoption barriers. These findings, while shedding light on the issue, contributing to bridging the knowledge gap, and informing local authorities that deploy or plan to adopt AI in their urban services, indicate that further empirical research is needed to better understand user/public acceptance and the adoption barriers of AI.
Along with this, the challenges faced by local governments practicing responsible AI principles need to be investigated. Our prospective studies will concentrate on these two critical research topics.

7-2-Data Availability Statement
Data sharing is not applicable to this article.

7-3-Funding
This research was funded by the Australian Research Council Discovery Grant Scheme, grant number DP220101255.

7-4-Acknowledgements
The authors thank the editor and two anonymous referees for their constructive comments. The authors are also grateful to all study participants for sharing their perspectives.

7-5-Ethical Approval
Ethical approval was obtained from Queensland University of Technology's Human Research Ethics Committee (Approval No: 2000000257). Survey participants consented to participation in the study and to the publication of their views by agreeing to a statement on this matter at the beginning of the questionnaire.

7-6-Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this manuscript. In addition, the ethical issues, including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, and redundancies have been completely observed by the authors.