Figure 1: The BioChatter composable platform architecture (simplified).
LLMs can facilitate many tasks in daily biomedical research practice, for instance, interpretation of experimental results or the use of a web resource (top left).
BioChatter’s main response circuit (blue) composes a number of specifically engineered prompts and passes them (and a conversation history) to the primary LLM, which generates a response for the user based on all inputs.
This response is simultaneously used to prompt the secondary circuit (orange), which fulfils auxiliary tasks to complement the primary response.
In particular, using search, the secondary circuit queries a database as a prior knowledge repository and compares annotations to the primary response, or uses the knowledge to perform Retrieval-Augmented Generation (RAG).
Results
BioChatter (https://github.com/biocypher/biochatter ) is a Python framework that provides an easy-to-use interface to interact with LLMs and auxiliary technologies via an intuitive API (application programming interface).
This way, its functionality can be integrated into any number of user interfaces, such as web apps, command-line interfaces, or Jupyter notebooks (Figure 2 ).
The framework is designed to be modular: any of its components can be exchanged with other implementations (Figure 1 ).
These functionalities include:

basic question-answering with LLMs hosted by providers (such as OpenAI) as well as locally deployed open-source models
reproducible prompt engineering to guide the LLM towards a specific task or behaviour
knowledge graph (KG) querying with automatic integration of any KG created in the BioCypher framework [15 ]
retrieval-augmented generation (RAG) using vector database embeddings of user-provided literature
model chaining to orchestrate multiple LLMs and other models in a single conversation using the LangChain framework [16 ]
fact-checking of LLM responses using a second LLM
benchmarking of LLMs, prompts, and other components

In the following, we briefly describe these components, which are demonstrated in our web apps (https://chat.biocypher.org ).
Figure 2: The BioChatter framework architecture.
A) The BioChatter framework components (blue) connect to knowledge graphs and vector databases (orange).
Users (green) can interact with the framework via its Python API, via the lightweight Python frontend using Streamlit (BioChatter Light), or via a fully featured web app with client-server architecture (BioChatter Next).
B) Different use cases of BioChatter on a spectrum of tradeoff between simplicity/economy (left) and security (right).
Economical and simple solutions involve proprietary services that can be used with little effort but are subject to data privacy concerns.
Increasingly secure solutions require more effort to set up and maintain, but allow the user to retain more control over their data.
Fully local solutions are available given sufficient hardware (starting with contemporary laptops), but are not scalable.
Question Answering and LLM Connectivity
The core functionality of BioChatter is to interact with LLMs.
The framework supports both leading proprietary models, such as the GPT series from OpenAI, and open-source models such as LLaMA2 [17 ] and Mixtral 8x7B [18 ] via a flexible open-source deployment framework [19 ] (see Methods).
Currently, the most powerful conversational AI platform, ChatGPT (OpenAI), is surrounded by data privacy concerns [20 ] .
To address this issue, we provide access to the different OpenAI models through their API, which is subject to different, more stringent data protection than the web interface [21 ] , most importantly by disallowing reuse of user inputs for subsequent model training.
Further, we aim to preferentially support open-source LLMs to facilitate more transparency in their application and increase data privacy by being able to run a model locally on dedicated hardware and end-user devices [22 ] .
By building on LangChain [16 ] , we support dozens of LLM providers, such as the Xorbits Inference and Hugging Face APIs [19 ] , which can be used to query any of the more than 100 000 open-source models on Hugging Face Hub [23 ] , for instance those on its LLM leaderboard [24 ] .
Although OpenAI’s models currently vastly outperform any alternatives in terms of both LLM performance and API convenience, we expect many open-source developments in this area in the future [25 ] .
Therefore, we support plug-and-play exchange of models to enhance biomedical AI readiness, and we implement a bespoke benchmarking framework for the biomedical application of LLMs.
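The plug-and-play exchange of models can be sketched as a provider-agnostic conversation wrapper behind which backends are swapped. This is an illustrative sketch, not the actual BioChatter API; the class and backend names are hypothetical, and the two stub functions stand in for a hosted and a locally deployed model client.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Conversation:
    """Provider-agnostic conversation: `backend` maps the message history to a reply."""
    backend: Callable[[list[str]], str]
    history: list[str] = field(default_factory=list)

    def query(self, text: str) -> str:
        self.history.append(f"user: {text}")
        reply = self.backend(self.history)
        self.history.append(f"assistant: {reply}")
        return reply

def hosted_backend(history: list[str]) -> str:
    # Stub standing in for a proprietary API client (e.g. OpenAI).
    return "answer from a hosted model"

def local_backend(history: list[str]) -> str:
    # Stub standing in for a locally deployed open-source model.
    return "answer from a local model"

conv = Conversation(backend=hosted_backend)
conv.query("What does this gene do?")
conv.backend = local_backend  # plug-and-play exchange; history is preserved
conv.query("And its interactions?")
```

Because every backend implements the same call signature, benchmarks and downstream components remain unchanged when the model is exchanged.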
Current approaches are mostly trial-and-error-based manual engineering, which is not reproducible and changes with every new model [25 ] .
To address this issue, we include a prompt engineering framework in BioChatter that allows the preservation of prompt sets for specific tasks, which can be shared and reused by the community.
In addition, to facilitate the scaling of prompt engineering, we integrate this framework into the benchmarking pipeline, which enables the automated evaluation of prompt sets as new models are published.
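The idea of preserved, reusable prompt sets can be illustrated as follows. This is a minimal sketch under assumed structure (the set name, keys, and helper function are hypothetical, not BioChatter's internal format): a prompt set is a named, serialisable collection of instructions that can be shared, version-controlled, and fed unchanged into a benchmark run.

```python
import json

# Hypothetical prompt set: a named collection of system instructions.
prompt_set = {
    "name": "kg_query_v1",
    "prompts": [
        "You are an assistant for biomedical knowledge graphs.",
        "Only return the query, without explanation.",
    ],
}

def compose_messages(prompt_set: dict, question: str) -> list[dict]:
    """Turn a stored prompt set and a user question into chat messages."""
    messages = [{"role": "system", "content": p} for p in prompt_set["prompts"]]
    messages.append({"role": "user", "content": question})
    return messages

# Serialising the set is what makes the engineering step reproducible:
serialised = json.dumps(prompt_set)
messages = compose_messages(json.loads(serialised), "Which genes interact with TP53?")
```

Because the set round-trips through JSON, the exact prompts used for a given result can be archived alongside the benchmark scores.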
Knowledge Graphs
KGs are a powerful tool to represent and query knowledge in a structured manner.
With BioCypher [15 ] , we have developed a framework to create KGs from biomedical data in a user-friendly manner while also semantically grounding the data in ontologies.
BioChatter is an extension of the BioCypher ecosystem, elevating its user-friendliness further by allowing natural language interactions with the data; any BioCypher KG is automatically compatible with BioChatter.
We use information generated in the build process of BioCypher KGs to tune BioChatter’s understanding of the data structures and contents, thereby increasing the efficiency of LLM-based KG querying (see Methods).
In addition, the ability to connect to any BioCypher KG allows the integration of prior knowledge into the LLM’s retrieval, which can be used to ground the model’s responses in the context of the KG via in-context learning / retrieval-augmented generation, which can facilitate human-AI interaction via symbolic concepts [7 ] .
We demonstrate the user experience of KG-driven interaction in Supplementary Note 1: Knowledge Graph Retrieval-Augmented Generation and on our website (https://biochatter.org/vignette-kg/ ).
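To illustrate how schema information can constrain query generation, consider the following sketch. BioChatter's actual prompt engine is multi-step and LLM-driven; here, a simplified BioCypher-style schema (the specific entries are hypothetical examples) deterministically restricts a Cypher query to labels that actually exist in the KG.

```python
# Simplified, hypothetical BioCypher-style schema definition.
schema = {
    "gene": {"represented_as": "node"},
    "disease": {"represented_as": "node"},
    "gene to disease association": {
        "represented_as": "edge",
        "source": "gene",
        "target": "disease",
        "label_as_edge": "ASSOCIATED_WITH",
    },
}

def build_query(schema: dict, source: str, target: str) -> str:
    """Build a Cypher query using only labels defined in the schema."""
    edge = next(
        v for v in schema.values()
        if v.get("represented_as") == "edge"
        and v.get("source") == source and v.get("target") == target
    )
    return (
        f"MATCH (s:{source.capitalize()})"
        f"-[:{edge['label_as_edge']}]->"
        f"(t:{target.capitalize()}) RETURN s, t"
    )

query = build_query(schema, "gene", "disease")
# → "MATCH (s:Gene)-[:ASSOCIATED_WITH]->(t:Disease) RETURN s, t"
```

Grounding the generated query in the schema in this way is what prevents the LLM from hallucinating node labels or relationship types that the KG does not contain.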
Retrieval-Augmented Generation
LLM confabulation is a major issue for biomedical applications, where the consequences of incorrect information can be severe.
One popular way of addressing this issue is to apply “in-context learning,” which is also more recently referred to as “retrieval-augmented generation” (RAG) [28 ] .
Briefly, RAG relies on injection of information into the model prompt of a pre-trained model and, as such, does not require retraining / fine-tuning; once created, any RAG prompt can be used with any LLM.
While this can be done by processing structured knowledge, for instance, from KGs, it is often more efficient to use a semantic search engine to retrieve relevant information from unstructured data sources such as literature.
By incorporating the management and integration of vector databases in the BioChatter framework, we allow the user to connect to a vector database, embed an arbitrary number of documents, and then use semantic search to improve the model prompts by adding text fragments relevant to the given question (see Methods).
We demonstrate the user experience of RAG in Supplementary Note 2: Retrieval-Augmented Generation and on our website (https://biochatter.org/vignette-rag/ ).
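The embed-search-inject flow can be sketched with a toy stand-in for embeddings: term-frequency vectors and cosine similarity replace the real embedding model and vector database, but the RAG steps (embed documents, rank by semantic similarity, inject top fragments into the prompt) are the same. All names and documents here are illustrative.

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": term counts instead of a neural embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "TP53 encodes the tumour suppressor protein p53.",
    "The mitochondrion is the powerhouse of the cell.",
]
index = [(doc, embed(doc)) for doc in documents]  # stands in for a vector database

def rag_prompt(question: str, k: int = 1) -> str:
    """Retrieve the k most similar fragments and prepend them to the prompt."""
    query = embed(question)
    ranked = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    fragments = "\n".join(doc for doc, _ in ranked[:k])
    return f"Context:\n{fragments}\n\nQuestion: {question}"

prompt = rag_prompt("What protein does TP53 encode?")
```

In a real deployment, `embed` would call an embedding model and `index` would live in a vector database; the prompt construction step is unchanged.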
Model Chaining and Fact Checking
LLMs can not only seamlessly interact with human users, but also with other LLMs as well as many other types of models.
They understand API calls and can therefore theoretically orchestrate complex multi-step tasks [29 ,30 ] .
However, implementation is not trivial and the complex process can lead to unpredictable behaviours.
We aim to improve the stability of model chaining in biomedical applications by developing bespoke approaches for common biomedical tasks, such as interpretation and design of experiments, evaluating literature, and exploring web resources.
While we focus on reusing existing open-source frameworks such as LangChain [16 ] , we also develop bespoke solutions where necessary to provide stability for the given application.
As an example, we implemented a fact-checking module that uses a second LLM to evaluate the factual correctness of the primary LLM’s responses continuously during the conversation (see Methods).
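The fact-checking pattern can be sketched as follows. Both "models" are stubs here (the heuristic verdict is purely illustrative); in practice each would be an LLM call, with the second model prompted with both the question and the primary answer and asked to judge correctness.

```python
def primary_llm(question: str) -> str:
    # Stub standing in for the primary model's response.
    return "TP53 encodes a tumour suppressor protein."

def fact_check_llm(question: str, answer: str) -> str:
    # Stub standing in for the second, checking model: a keyword
    # heuristic replaces the real LLM judgement for illustration.
    return "correct" if "tumour suppressor" in answer.lower() else "unverified"

def respond_with_check(question: str) -> dict:
    """Chain the two models: answer first, then verify continuously."""
    answer = primary_llm(question)
    return {"answer": answer, "verdict": fact_check_llm(question, answer)}

result = respond_with_check("What is the role of TP53?")
```

Keeping the checker behind its own function boundary means the two roles can be served by different models, mirroring the chaining idea above.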
Benchmarking
The increasing generality of LLMs poses challenges for their comprehensive evaluation.
Specifically, their ability to aid in a multitude of tasks and their great freedom in formatting the answers challenge their evaluation by traditional methods.
To circumvent this issue, we focus on specific biomedical tasks and datasets and employ automated validation of the model’s responses by a second LLM for advanced assessments.
For transparent and reproducible evaluation of LLMs, we implement a benchmarking framework that allows the comparison of models, prompt sets, and all other components of the pipeline.
The generic Pytest framework [31 ] allows for the automated evaluation of a matrix of all possible combinations of components.
The results are stored and displayed on our website for simple comparison, and the benchmark is updated upon the release of new models and extensions to the datasets and BioChatter capabilities (https://biochatter.org/benchmark/ ).
Since the biomedical domain has its own tasks and requirements, we created a bespoke benchmark that allows us to be more precise in the evaluation of components [25 ] .
This is complementary to the existing, general-purpose benchmarks and leaderboards for LLMs [24 ,32 ,33 ] .
Furthermore, to prevent leakage of the benchmark data into the training data of the models, a known issue in the general-purpose benchmarks [34 ] , we implemented an encrypted pipeline that contains the benchmark datasets and is only accessible to the workflow that executes the benchmark (see Methods).
Analysis of these benchmarks confirmed the prevailing opinion of OpenAI’s leading role in LLM performance (Figure 3 A).
Since the benchmark datasets were created to specifically cover functions relevant in BioChatter’s application domain, the benchmark results are primarily a measure of the LLMs’ usefulness in our applications.
OpenAI’s GPT models (gpt-4 and gpt-3.5-turbo) lead by some margin on overall performance and consistency, but several open-source models reach high performance in specific tasks.
Of note, while the newer version (0125) of gpt-3.5-turbo outperforms the previous version (0613) of gpt-4, version 0125 of gpt-4 shows a significant drop in performance.
The performance of open-source models appears to depend on their quantisation level, i.e., the bit-precision used to represent the model’s parameters.
For models that offer quantisation options, performance apparently plateaus or even decreases after the 4- or 5-bit mark (Figure 3 A).
There is no apparent correlation between model size and performance (Pearson’s r = 0.171, p = 9.59e-05).
To evaluate the benefit of BioChatter functionality, we compared the performance of models with and without the use of BioChatter’s prompt engine for KG querying.
The models without prompt engine still have access to the BioCypher schema definition, which details the KG structure, but they do not use the multi-step procedure available through BioChatter.
Consequently, the models without prompt engine show a lower performance in creating correct queries than the same models with prompt engine (0.444±0.11 vs. 0.818±0.11, unpaired t-test P < 0.001, Figure 3 B).
Figure 3: Benchmark results.
A) Performance of different LLMs (indicated by colour) on the BioChatter benchmark datasets; the y-axis value indicates the average performance across all tasks for each model/size.
X-axis jittered for better visibility.
While the closed-source models from OpenAI mostly show highest performance, open-source models can perform comparably, but show high variance.
Measured performance does not seem to correlate with size (indicated by point size) and quantisation (bit-precision) of the models.
*: Of note, many characteristics of OpenAI models are not public, and thus their bit-precision (as well as the exact size of gpt-4) is subject to speculation.
B) Comparison of the two benchmark tasks for KG querying show the superior performance of BioChatter’s prompt engine (0.818±0.11 vs. 0.444±0.11, unpaired t-test P < 0.001).
The test includes all models, sizes, and quantisation levels, and the performance is measured as the average of the two tasks.
The BioChatter variant involves a multi-step procedure of constructing the query, while the “naive” version only receives the complete schema definition of the BioCypher KG (which BioChatter also uses as a basis for the prompt engine).
The general instructions for both variants are the same, otherwise.
Discussion
The fast pace of developments around current-generation LLMs poses a great challenge to society as a whole and the biomedical community in particular [35 ,36 ,37 ] .
While the potential of these models is enormous, their application is not straightforward, and their use requires a certain level of expertise [38 ] .
Limitations
While we have taken steps to mitigate the risks of using LLMs (independent benchmarks, fact-checking, and knowledge graph querying), we cannot guarantee that the models will not produce harmful outputs.
We see current LLMs, particularly in the scope of the BioCypher ecosystem, as helpful tools to assist human researchers, alleviating menial and repetitive tasks and helping with technical aspects such as query languages.
They are not meant to replace human ingenuity and expertise but to augment it with their complementary strengths.
Depending on generic open-source libraries such as LangChain [16 ] and Pytest [31 ] allows us to focus on the biomedical domain but also introduces technical dependencies on these libraries.
While we support those upstream libraries via pull requests, we depend on their maintainers for future updates.
In addition, keeping up with these rapid developments is demanding on developer time, which is only sustainable in a community-driven open-source effort.
For the continued relevance of our framework, it is essential that its components, such as the benchmark, are maintained as the field evolves.
Future directions
Multitask learners that can synthesise, for instance, language, vision, and molecular measurements are an emerging field of research [41 ,42 ,43 ] .
Autonomous agents for trivial tasks have already been developed on the basis of LLMs, and we expect this field to mature in the future [30 ] .
As research on multimodal learning and agent behaviour progresses, we will integrate these developments into the BioChatter framework.
Remaining accessible in the face of ever increasing complexity of models and workflows requires continuous maintenance and usability improvements to allow broad adoption in biomedical research.
All framework developments will be performed in light of the ethical implications of LLMs, and we will continue to support the use of open-source models to increase transparency and data privacy.
While we focus on the biomedical field, the concept of our frameworks can easily be extended to other scientific domains by adjusting domain-specific prompts and data inputs, which are accessible in a composable and user-friendly manner in our frameworks [15 ] .
Our Python library is developed openly on GitHub (https://github.com/biocypher/biochatter ) and can be integrated into any downstream user interface solution.
We develop under the permissive MIT licence and encourage contributions and suggestions from the community with regard to the addition of bioinformatics tool integrations, prompt engineering, benchmarking, and any other feature.
(Supplementary / Online) Methods
BioChatter (version 0.4.7 at the time of publication) is a Python library, supporting Python 3.10-3.12, which we ensure with a continuous integration pipeline on GitHub (https://github.com/biocypher/biochatter ).
We provide documentation at https://biochatter.org , including a tutorial and API reference.
All packages are developed openly and according to modern standards of software development [44 ] ; we use the permissive MIT licence to encourage downstream use and development.
We include a code of conduct and contributor guidelines to offer accessibility and inclusivity to all who are interested in contributing to the framework.
To provide seamless integration of the BioChatter backend into existing frontend solutions, we provide the server implementation at https://github.com/biocypher/biochatter-server and as a Docker image in our Docker Hub organisation (https://hub.docker.com/repository/docker/biocypher/biochatter-server ).
We invite all interested researchers to select the framework that best suits their needs, or use the BioChatter server or library in their existing solutions.
Benchmarking
The benchmarking framework examines a matrix of component combinations using the parameterisation feature of Pytest [31 ] .
This implementation allows for the automated evaluation of all possible combinations of components, such as LLMs, prompts, and datasets.
We performed the benchmarks on a MacBook Pro with an M3 Max chip with 40-core GPU and 128GB of RAM.
As a default, we ran each test five times to account for the stochastic nature of LLMs.
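The resulting evaluation matrix can be sketched in plain Python, with a deterministic stub in place of a real LLM call; in the actual pipeline, Pytest's parameterisation plays the role of the Cartesian product shown here, and the model names and prompt-set labels below are illustrative. Each combination of model, prompt set, and task is run five times and averaged to account for LLM stochasticity.

```python
from itertools import product
from statistics import mean

# Illustrative component lists; the real benchmark enumerates many more.
models = ["gpt-3.5-turbo", "llama-2-13b"]
prompt_sets = ["default", "kg_query"]
tasks = ["query_generation", "entity_extraction"]

def run_once(model: str, prompt_set: str, task: str) -> float:
    # Stub scorer standing in for one LLM evaluation (score in [0, 1]).
    return 1.0 if model.startswith("gpt") else 0.5

# Every combination is evaluated five times and averaged.
scores = {
    combo: mean(run_once(*combo) for _ in range(5))
    for combo in product(models, prompt_sets, tasks)
}
```

With Pytest, the same matrix arises from stacking `pytest.mark.parametrize` decorators over the three component lists, so new models or prompt sets are benchmarked by extending a list rather than writing new tests.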
Retrieval-Augmented Generation
Regardless of whether the initial answer is correct, it is likely that the “fake answer” is more semantically similar to the relevant pieces of information than the user’s question [45 ] .
Semantic search results (for instance, single sentences directly related to the topic of the question) are then sufficiently small to be added to the prompt.
In this way, the model can learn from additional context without the need for retraining or fine-tuning.
This method is sometimes described as in-context learning [28 ] or retrieval-augmented generation [46 ] .
To provide access to this functionality in BioChatter, we implement classes for the connection to, and management of, vector database systems (in the vectorstore.py module), and for performing semantic search on the vector database and injecting the results into the prompt (in the vectorstore_agent.py module).
An analogous implementation for KG retrieval is available in the database_agent.py module.
Both retrieval mechanisms are integrated and provided to the BioChatter API via the rag_agent.py module.
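The "fake answer" retrieval step can be illustrated with a toy similarity measure (word overlap in place of real embeddings; the passages and stub answer are hypothetical examples): instead of embedding the user's question directly, a preliminary LLM answer is embedded and used as the search query, because its wording tends to be closer to the stored passages than the question itself.

```python
import re
from collections import Counter

def overlap(a: str, b: str) -> int:
    # Toy similarity: count of shared word occurrences.
    ta = Counter(re.findall(r"[a-z0-9]+", a.lower()))
    tb = Counter(re.findall(r"[a-z0-9]+", b.lower()))
    return sum((ta & tb).values())

passages = [
    "p53 activates DNA repair proteins when DNA has sustained damage.",
    "Ribosomes translate messenger RNA into protein.",
]

question = "How does p53 respond to DNA damage?"
# Stub standing in for the preliminary LLM output (the "fake answer"):
fake_answer = "p53 responds to DNA damage by activating DNA repair proteins."

best_by_question = max(passages, key=lambda p: overlap(question, p))
best_by_fake = max(passages, key=lambda p: overlap(fake_answer, p))
```

Here both queries retrieve the relevant passage, but the fake answer matches it more strongly than the question does, which is the effect the method exploits.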
Disclaimer
Neither the European Union nor the granting authority can be held responsible for them.
This work was also partly supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract No. 22.00115.
Conflict of Interest
JSR reports funding from GSK, Pfizer and Sanofi and fees/honoraria from Travere Therapeutics, Stadapharm, Pfizer, Grunenthal, Owkin, and Astex Pharmaceuticals.
References
27.
Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4 Sondos Mahmoud Bsharat, Aidar Myrzakhan, Zhiqiang Shen
arXiv (2023)
https://doi.org/gtdnfg
28.
HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang
arXiv (2023)
https://doi.org/gskd97
29.
Gorilla: Large Language Model Connected with Massive APIs Shishir G Patil, Tianjun Zhang, Xin Wang, Joseph E Gonzalez
arXiv (2023)
https://doi.org/gtbgvm
30.
A Survey on Large Language Model based Autonomous Agents Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, … Ji-Rong Wen
arXiv (2023)
https://doi.org/gsv93m
32.
Large language models encode clinical knowledge Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, … Vivek Natarajan
Nature (2023-07-12)
https://doi.org/gsgp8c
34.
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre
arXiv (2023)
https://doi.org/gtbgvp
35.
There are holes in Europe’s AI Act — and researchers can help to fill them Nature
(2024-01-10)
https://doi.org/gtdnfb
zrqu~?z?pBs9FU$7-K1b*Bg|l#l@UE;Z$h`=V`9VX3~B#ZN+@;F>^79dzFIK8U5#Y?
zcy-Ro4d(>a)$Wx1&*-0OTPbu@x%3uQ2^dg~FjWPj_^ya++7ea%pC6%9PX00n$zHk?
zWX9-x3+{r0=|Y3v&fw28AGNo5Q?S$kz0u9g4u4xym_O4dlSDyL5VU1Y?9%eN=Qf_V
z3C;mix2bD@Z5{ZZgg;Zo$4?{vUpTOny3(y^fJ3o*lifmENa}Ln*f?_
z8^oyKD3%Ux1SsTOd;jmSh7fFI(e?hxb7hZzSwFm?cAM?9C9W}&Bde95cG4-V!Smm(
zRQWOKMhMU@)LhxCtRr*3KTY_f%+o|XpOZ(tt;e%?bygob!%>w2Z#--6?mDc))D6dh_m63Ipdyp5uWFhKiKo
z3>U%eBN8WlV7egly$f-cJ=4Z#kXncZ>zDj-Zv^(V*=Oby0DYY`{?3;91=KA6&jbk7
z`iJN?g{)Ws3}c@QVy4XAf#uOo{tS-;Pv(bK&+ux53DrQ3@K<=iofoqjFi#!{aN5
zXYPP$%$VMr$mtxhJ7`}|B&a$}>}_LyyV|vLQ0}(KYs~K&`l_L5B-u~4Cf)(Yehih`
zG+`3m45O%6EV#r*K-G#t)ad|Y07u{if&pKF$c~yeOd5CB^Kg-`;fy|?$i#{*X7qXz
z4pD;$vDZlFkeK0;FfDoYg&5NcQ3c$9OmlDFh46k``F?Zz2sGsTc1F%DKZ!85@Z4l;
zmKCOmIts*j)k&D`8hgFQtr+g?0a0&cC^0Fu*u|XE&|tGGz+CziF|!HH8IprZ>OgQ{
z5!gJ@Mdw$;2SQ)8c}d>NvIM#D+)aHX3`>sc~2}SV&c^m?h
zn4%HLnS}WXRzNHs_!zE!W`k86Pd1WM382Wm5v8T%iH%qutfI7Z%Ch1uaIy(j?0#fw
zyr$oQcu=hYqv*Dbgzfvk`CoVYi$%w|q9&?UX$aQmnwDsl$s(?Qbl2J~HKqq88DJx|
zyzx;93ZQfm^Gtf!c(Z`sc%|3j?a~5CX+{qRzTzg{znrfj$2=W;8_mvV;L={)OyKB?UbGXgb(O9OFa~_Z`dhf^RLR~Rosu5
zJD-OHBMlMS$GbrcoqqN?5{+p2uGBJZ%C}p=gUR3X>M+@^74Jf`c1btXK%I}p&5qWc
zkH#Q(mfSPo0+W2u?6>v^mSx%Z$ps%$P|w||bC=1Pk{xoD(Z&(*34xlW9i27@DWv8C
z=W1JkezSyRBFZvmhhoEya7
zymZuhSMO7&Nq#P3L$f
zzBwGyV=Rg6p>`Rds7chV^3gj2fa)E0p6e^DF#3R(
z8e?viLh!(y=7eWa(C6@z{hVJA5xAXR&n3wz27T^}nnCEeziJg~_(G{QQrufX^pkG{
z`g&iORC~LGCsnB7=vWqRn3}-V-$TD@^IT!yLwlucYAA~|DGc9DUhv!6$SKo-LBqG7}
zsdaH&>`dOlWe28*bL@%&{b@cmWg~La92E~Aa#)$Leit>-$Yj1o
z*YZP`Opo`~(50cwcbh9G&MeLBETc%(+)N^!HI-)D*nq0|rmj)SKK3{{QX1l1iiudl
ze#oJKLA0y2#uEZN$CE7Hw-K7YTHs#ZH6%v}jL;cYzT${@Hu|wV_x!O{ii#HK$f>d#?JEou2!sj;smPBQO#4N0;RN(@zg7)*KPz}Dxr1T2McZ3-Zjv_ytT7&+cv6w1Uj5i@}E%9nUGAXO$g
z`B~iz#<+v3v)PhDQ40A98q|lLFX2X64#M6w)$Hv4jHfj-K=)p9zJxteP(X{pj{^o4
zg48<&jQ`&C%=Wm95ZrUBmL
zXo4NP%0`msq$_jq{=8OD!Z>gWs;t@v!Zm+J7COE@S>(&R4TbKt+O6H7*L#lX#t?be
z$5~NN`b7P)eq+UdtFUv&bX^6`qKo5tz8Kkkq3^@aaSZ@jvtMS_m~voyB2wgZuh-AImFv#*jqwlJcX
zhtD|W@lOR(O=E10RbWG{VjO%c3vT+K5&gJN9Q#|Hn@*L;~
zmlzi)Y$;I)HJB_f(y8dkv4|nV08&gV_CS%MVno*uf)m(v(_rpy18DfdVU#?P)r~b@f)${2%3djsJZL;S|
zrlPGlSN<_r!V7suS#yH+Z7Q?`CK3=A~CY{6YCvbU8oz7Fs@h3
zK*-jHE=EQFYu(;g9yl^T4W1)X#nSF#(DvK&w_*p%Vuw+a;ueG$4jlmcJS2C8tPq!&NX
z8&m)rhfh2N`K`+HhryH+6yVsb@b15VjSc`z&u{1{*?}k6seZvH==^!R-t+?f_prH2
zH~+;i5VO=?g8lai5V@YAkXvAu)%TSYG8djhS?!uHn|6HN;85e{x&SX(UoJ
zdZ;YJfJ%1=_pAmV%m1!fnwtBc1^>qf@c(xrHQd&S}ZU~z5T
zv#2`xQ1k!F#kHvhSW(lW>ij4|B}8n!95aB4V|ZI|HsH_iqv_QtkzAfJHfZl-l39Yo
za6hpyqqw*Lq`NJftg$EY9_qUfgT!lU8Xl1zCc2mZV1cA-O&_uy^q1WKmH)$0eoBYF
zWIM!gFAmuD{HFKIjyr7~FF*f3IWhjrC!kX-*??BG+g^9uODge#&RV;W@$}@8R)zM?|Gg(OQ>vHGI$Ll_Q2k?ElH{9B5{ql8&
zzK-TgaTk0x6$AJ_DNB~Vj-HkLC(neIB82?^X=!=F?A=^CGJWzQTrc=MKmPOV&PZ{h
z>cN=)THF3v`FY~@zIhIOP3~=eb@{z`_#_EEv!`~g{M*OpS(xp*fwlp$ZYjzK!5_z+
z&vWIk?DSP3ro|41@nNqI(%#R+bL5B842Q487Ok*4Mq`6oC9kCszW@!GaZ5u+A+M)b
zv7Z)(f3o#DvjV(7jB{7`ndEYz@kW*UKuhk?1&2A_+$gME&BFohVtsAidgZ^gtO=bck{%zc``XUmsE;;pfhR7gRz?-(?mA~~q;*B)OLXAUf?lre1Ka@d@>RwbOP&m%7z`=D
zyxQ>V_tQaaXRyBgA%*S{ELXz>P$NU%ve9YJBX-2=?Z6%Jsci!wNJ+M7ff}xsy@+9OKgJR|>`XuhlzvKZy
z6AJXW+_HwHvm;`AXN09IL#PF7q44is&VBnlMrqA=jPD)w>2X>;JwBxHh4!cU4Rzt*
zYDaP6_(eGLjbCaW*~@{glkY|{Z*>M1x#U3Vv
z?-%?oC?fQQnr=9FlGeAJ`9ahU?~-Hkm-d%^u;Rj9mGL_y;RZr{W}$hBD2g2;Pr@rG
zW4B&;QNt_do+Gw}wVOd8uU5ejoL$Aw10aY
zOw&6i2g>8zD=bzTxM4NLw;+KcdB`W&+;P*HA`dw4QLapq8d*h2N?;ByF1u+dJb>$r6`v&x@O;o(uF1Jg2>tMOO8~`?o`}zFRWgCP{#n3-
zfFr)JEw=5{S1XDlmxf=@jrxxIf@xl_fVV5aZY?Sab6tmh&CE&>TG)Okf>0mqi9fVo
z3zsCNHI(*)@WveV#t7JT4@GCUR%9`tflEjtEDb+t{s?kadyxyBz3eB|#mkNCE4(OS
zjjhxcC3y|G_~8ih%kdp5C>|XLQA4|Xjsv2bE#W=1i&_r!h-Cld+DgpGF`ayzZB}Bu
z)jtujh1A8r`2}`MKu@*jLtytM+K^*JA&pdZ?6t%BJ<>^MOFCup`2d3kl~H`d+7o
zQu~2$h??>hQ6jpJppIi#LKx{3Cs=F%+Y8Nsi^J#DfauDJ5xB${T;YTlL*`-M(~pii
zF41rL@5~(K3CHE{)e9#YyjV1G*KkwOdS0H-#`)u?4RvX~fvs2c4uT0rqH86tZDA^<
z1MRSPk$YPVQwJRqJwXMIW5pV~h4Knby
zn9$8|rX925Kpj};@@;;wrq2*r2>q2>2)2O3>3c)rn3q!7HI40#a23n1RdxgO7%G0F
zpo8Utu`m{slD9RUEK-mlT18Uq=cz|u9Nv@S6$!a!lsD&b)xN_xNeKoQGFy!uWJHOV
z=t;_gX9S?OsU!8(=3IhG0x596pTdbebu*BWl2w@wAB~V15LQcgF(IcI|DcL{cEpV(
zV{T6QZY0|zYff;K7l03LKu4T)Kpvif(aR+vHDdg06Q_xFbiwYWEtAZrNH_*A9y=6t
z4*8(7@vPg0jKu+uWADCR?MhXaBI4i=cYI%tTr@!N=JD!#W%%AL#>D7gySoHjm!PRJ
zkdbT2tTAuoII*6&j2&j$3`s^lH=dYu
zDf@`3)c|?xegE8?+%F;UFLRWhAxxtsGW4i;efsdIMK;KKw6?`lM;LOfbUl;(YN=%Z
zVMbz`F~NF&(8j-tKyDkcmy@@RL=LMXwhbIv%iJpbD|WcKOCal@Ma#t@UDGTTAV3%9
zJKh)62~eC7^NB`2cq0=KJE6aH$U+(Put>NL>&KYN@kxhHs@h64LmQtWu~K7{VEhff
zZH9j7yC(|1^_U{dlvyk8>i?C#64h+R731Yr?bEMDFCcy6&@8sZk2Trh9^NxTI0Sg+
zs=NQ}y|hjav#h>@)F;9pE?6H3XHCLUj`UDJS?E%;x>J<8HX1icI{Yz(zxpyL?hK!)
z{J`iAf2Cer^b2hMNXR@|pX|sx{a&6ZQ!s-Mq*V)Z}nloX4``_I__CZOnp-VZHYY_%zuf
z4KZ{k#tj}I{;lNrvw!wPdthO<@N>qRNQaW&W{;5C?EtQtWW#N@vG8WhycpnwmO~d>
zvNxqE2l1AI<1Hs7c2ah1%rdxu2mtGz&U*4Ed{+;WjoRu~Dk#zW!(VfR>7_9zl3#K)
zUhQnyR7N{%e%E+yNG+k@?K}$l;y{K0$%EA-Z4F8}JaR;@c>z-FFmEBA!G4Z=8I3-y
zj12=AB$ZPiP%OqeifNBk;sdvq4F8@>&jTHkKZwqiI?}bJw0_OR_fmuFOLw3@(x2!2Y44m&*?B+Nb&?i-zO(xZ3ph>YnR{
z6?;{7TV?!06Px;$wXiJ4&*1_u-XLFr@)28l$`2F9Jf}u=W~_y26ykaHQJ0|
zJG%wyLsUl#@ifwc>ARZYP^pMj*qZ*mkmy8wqqJjL$hzD3iink}=z;)dx6C~p=SJ}G5;!J}c
z$t+uj^`b1aT=7+;IVP}Hy7f>WIdD-9ILv%D>#iW1cGBX-M5K$oD4A
zbkh>?<&@FSjDe^ZUq*Rncd5_q2Q11LUx#fg*Y?k_z3XNrhqyQR%9!Kp7ycvl*Hz~Y
z()HShDsP2sqtOLHZ`{d3A~|lbgtgvy0=d6xy74KpJ`9)1>Gf5A>WAEQ6=5m=)ED{S
z6yLKRjr@-~)1Ugox?BC)T69-&*Y)QL1w$|6bi^BA$&{J#+hPbKWj+YM2XpMGVq)nS
zImJ>74VS7W^?9Y81_5Hmx+7F9;i+RVgs7AP7!H=^wx;p#IwRTV
z&G!ocKuzQ~KSz%u=_?PP3=cBztzfoiL8US!6RF;M0ZZ^Q{N8DPoAl~VJzi{13%!k)
z$OI(~SY@M>m+0e?QId~W$el9u9=b$hWj^#VzMem!3(c%k4AG7E
z8O*y3PPVCV2@hklu(5fGP7iHO{kH_auOnm16kv0zinoM(Wd+F|aa#{wkQ4mWLDe~;
zLrNZ2KeL!xFlf$>h9jC2yjnNmzUUm(hq9#({rL)~By-)WWPtaS*D|tBq7Zq`(q0b$
zgxF4SwDJ{6dbyqP2aP@i>q^FEC_GA6>Oc3m5e%trnSWAmWfda1b7GzN4~jgh2n?(%
z-QSfET62G!D#n9xe3h139I(#x)&giTL)c&H-~`i+p(s-P50cA0j`b%01Z+UwBhaqy
zyfk@`IixWROHGi+lyxz8Y*6K1?RNtHRM0lfBMOseaR-MN
zXBwv^n!&`)FEMec-^aXFSfd8nLOIpxf9k8G@zi2cOg`YFTSX+7MS40`QltYIl3h8S
z=eCji(;=^TTX;!oh;l#kf^FI&9%qPf^`0thDAA9Y)v$xps7F+oGnjsio+o1G*&t}(
zTMchTIDp5sRE3t16-D)RqEpV?sE}kPfK2@UEw^VItJL&n4XeR5;^vpH&z|--%%9Eo
z;Kt!{<3-`cTB}Uz&aY{HrSlMQ%t)VJby>Bj-Qt3pNmtW?7~T83UAHZvT%#vGeW?!R
zorjo(XP+VR_hqSkl@^WGR>)Fxlw1hvi)!Om5ugks#%&Pu;tR1r(;cCCB!OO}@Rx8m
z$|Wa_j%xPCBA%|bD5Wa`*{t)Ss5Sqbg8K;edwNy%Btz7zQDWWc9w-A`_Px|$gcIpD
zpeH}LF?l*@hkI~2UHKLIE5p|Wy?tY41V%3Ys0J*?^CR*pIr@y97=5{jEU_%P3#ghV
zV1vFUG9aK9=`(iRm1Q!OTI-b-KR0o&fVHW4<4bHr1zU@nVg-|?X49^j0v5u*pbw&{5CZ}XFq;dva>P&|$4W!Vf$e?gUIuUfG7SEaNF4w=zL6X58
z!sonG{rsul5mw?Q2j44OAP07xMEF~EN5!OwJR=%TB2wJi3egIfD7vJM32=pcYj!wL
z!+blJrBYMP-YFWjNH0f-d>nUM{Lx+1G>?nYGhp;oLg)-79+~B{^I(kFnp4{)!T0>H
zmho;dY9_z@Zwg=5nH?Tq^l=c&R3)
zU%`vX_3~aqcvcQ1$?{shazu>G78Meg#fOYF*U4!N6RPItkJ6z?^VnhfJ;!1^h*SK}=G?iBmk)?Vu8M``!PZ&T>o#QHPlw-(2D&)5J9GhPtkA{#DQ$lBlmvH5vmdPMK
z?jni)H4r8I1kGL{#D)2>-TX)~&LRS&s_$
ziD2Yn*+2t}q&y3(IVp^M-4T*`@`y^Dou{z4Cg?|12k_VifBd34gck;k&)KOC-)F
zp1;p!6Z4|fj3p7>XY~;-H4wxIl`
zTmh`5h@h0Pzu#=gyhJr{w`RK9M}?7c&@ucNre5S{4K}6qe_Q;yxaX;p0!0aLf=$)5
z>s>Fqt+MXluSyZB*l7Yen?GiYrdpZ+rBl_-=3#Y4X}37Xoul=O6(dBM+uhoWmalf?
zIC=*l4zWprlPcqXWI5A2DAJ{2q9H4saB^ii>c=
zh5R@<4Pgx=$MM7~X4UFX31!1!(vB3!dR(8dkl*_*kv%JQDekB~o(rTptu7J_+!pv~sfDID;bgx|L)(d7~olU?aZ$jE8q?OD-9|A)cB@^#h*NcqCw>PXtm;
zV#SShMVfI
z(~x-=iF75bq9Rs2KB>jh+f6C~ks8#TSdg67_j;=_2em4F!5;*r&4T3u6mb&Phz=FhM@ne2oO@m7g5Cr`j5f>G59}*XszOd
zT*gg@pG6>m;9%omOTWN}!vnl~K}0~VeO70Yu%x$h)y#2rXmrQ>Lw0;H<6(Z8beIU%AP2PT-_<3R04Vh%VVOb^hD#a
z1>*l6EIzTLC>SaKRcce5#}k6ME-kQ|Tg|i1ZcnKFs$tyu^z=#M8+@7e%**ZhEMwe2
zVAK=*crL>Jfuq#>a(2I%m&*FWl&KV;D5o)&H!#O8#
z5B`}dA`AL;)O>fW+8iZ?Qi`cst+l(~#_A5PF|2dW4Vvf0DNP^l0-ubi%pNtUU3hM%
z#~1Zjxpa)?y2nwWH~na!rJ6!!y-b6RXW0@paddNAuJX%n1?Fjqe|Wwj?@o|P3IAwg
zzAqVC>!T!37T0Kr`RPXu
zUaF2#g^poaIuHaE!LELNOjv|C#olONg2aJ{DshOW;cBvliMK!4$|eFvyK05Qo?vA5
z(4*^7WmWTVC9_BHaANnHmzJ~XVKg&7mv?sjQmBrN^uZ|7V`KD9;I^nne=MAwtd%Qx
zwZf!B$73MTJ-)eRg>5*P((MPkX=EX|+eA_3-4%+LB4EDR+(_zJH}22({GpToTbP&h
zCACMp7WX57t~W1g(`~IZtk*t=i#CiPesQxm&ybl@LQRf=K(E7YNa0FsPu}vhZr<`4
z**+SwEd5H8XzVbn0xG?Ez8Y)LJq+5<)
zFOk}DlYZsJ2gZ`G1X_hgC~J0I__)v4DDSwda)PYH&O5R-D&pU~ej($
zm(ik~v!K*QDWNW@4IV?>;B0E4hl1R(4t2Wt0BCltR2S&_JwS$PS7Q-w=&)l|jy+PG
zk8YJ(NFO^&lSPN}GrBf(Hl|)a!HH9oaibtnv~3Xqp-!T@SL{#qmKFFaa@;r(COa>K
z+~lTH&<2}5~|Se}2~dHYuU9_|k1823>hS%lyo
z08El%N7tR7oiw5@)Pk?qXwuOBhnOCuW#u;0IXvmNb5k!6Nxr`FeR8HNZIdMw(<4~4
z4NxviI~T?BE|(Dp>7KcKO#BX%sqWsNkrs1wCqpcBd>d0Au@mu<@GZziyAlrAgic>1
zXwlzj{Vw3T*?bu%DvMS=DXjv@;fWP70kp@D$2pha`&b}27a=yj1C4ka`kkj)I(&jM
zP6DBUhkvV1w9GSG695wt>J!~9hg)I2X^%$;TGt?g))hl<3loSE5g1u+K3AoT|Ar7p
zYgGWS^`eF#5#@9wnVuL`OkT3CXwlbv@$l3KnN!U0kQpj^i<6^OQ)n4Mv->+Ck9SD)K
zq%bShU_$z~YZ?~Ck3m(maq(0ST{2(uu8hm6%qj387cuw*QqJHyZV>~NO_)Z@)Drw=M5xo4pMP9=?Qu@EsUDNS
zvb>N=JpbMZ$?IGi-&(<8rIQJ_=ZPU1d$Dw}l^az3YGnZ*>c^FiKS56bC^8LB{<6_0
zpMutXkBw6D;bt(!thJ#S^Q_JCpOUM!Rn$L?_|f_G$NUt6UD^!P1meQm8BWtDkXw~h
z5`flYVW2M(0mim#E*oB_KC0KvQDS1yCGyz_efr+kRkc@t@sp3u;TR2Y5s|#FEOteY^*T#L$MFf)$03+xThbI4ZBo0x$Odsxzj&E2?7P
zZ8Awh;jx-?og$DaCq7wHO8s<6x30youP-Lg{R8?{@w+O~VqZ980UbkVH!U!`#c|7T
zmr!9p;}ZKm$WhQs3$-5hr4cQC#yq~)Mlau(ICoYnH#=U{kMA`VK<;~!64xIpr?|_)
zap4l`GJ_F;#>4-acgJp0Hgb5(G1TMI>e4GOT;G7^$gSkxc=x7soCD$BS>eq{pM0d8
z@(YP2;G`U4<;bKy_Mt;{i~i_HIAdR~W9-+WM8IH+E#aWUTu(d!)|n?^l_||<_E?v4
zO+PU#^i8BMW1+k4a!qc6`aRRDuSdB`772^-=l=tCK#9LqF>+Yoq-ca8|sz#ME
zLLh<&MWf~&XR7*54P|LRmc0ORzXKjxH1?ZO_O<_1UQ30g>v{L^puB%di-%zQLtfED
zev`gmITdkHqWyA+FLwVH0$=%Nux=Y{A;af12lnwmPGxm%cdu=1<2-72MZOq3IsN)vl0`X#H0!xGp66<^|cfd2WTo@qUe
z`%?ulc%eM``ue%LtLD
z@7wMfc<-NrD9O?Na&PlA2t-3*M>5PsLMj8IKzUmI0Wn9(*9R)Hz$`&dB#v1MP<(^%
z+}(usex|qz5AT0OZPF@Wz7y-FVDROppxjT0gSGGomRh(6ie~eUU=E5uaA^**p|pm*
z+|$~xhKktQk=jkzj1}hv$nVUdt!QrlkV@F^sUynmR(|_HBFHP&X@*4r4rB+5^CeFP
z+v+gp1Il~Jaksqq74PPLLqqw^eVBeD&wGFgAM%LrYbJk0z-y8xKLk_f$oH%oCr(F|
z=W2^nc`!Z_!%_?~>@FNR-hO|>N4kAVrxDp3lUjA;A
zJSr#t<_t=D73UrF%L@H{yV+keFsw^x$uGb;!8gxr|0KVUZ_^WxsSZb%bWCbN6LZJBWfwKq`yUR2=THE-Y>-!
zjuH#T%nk(8?!FvCFxK4T+NgcW&+{YQM+P1psiQJb)8}7jhOY3j8-mYy{7~o0L`_hnfAR(+OxrVtbV|9Q1iPiFf+)O;B)|w}c`N?$hPN9kDXB>3P?e(au2Av&>Fem%ue*O9|DTkfKkxqY
zllOV`clwo|=imSN>0h+ZBuFawU@tzTfpA1{)RCtBx5Qz(YW=xxjr5bJKd08E3(#Ek
zK|_C9J2+1&Hn&4MJ!s-;lAUJEs=4mH>I5`5gBqiU=*I1+$0Ynk+(#SR{C3q^bzSwa
z-Fi3P_@VpJyVi_n!k}`tu^n$|ZF;`-u-)2>_qRV?{zHsFEhKu=H@em3_7~dguQdHU
zkoHsQ^229V((KpYsmo8lJ|&_*?c5u8}w&c#%9O#X`5R{o2jvl|FWqCdzsH;H8&24;Fn)DGJ9*bVp-xgW+N}E)r-I#d{8=bQij)xCsOFR-q
zNFDl(l<<{0ggOb5Bq4?9!OCw=)cXaxl6bC+a%vTzS?lO>WZHpv+$GH+37VDsgn)n3
zeLuHT_s-Ur91t(yLq&WNyyn8+!rAz2O8vG9us#kZ=~e~YK8$W@1}<$lvF?vrOh+W?
zxGO|2_=dGeL?AQDB^`>P25;a67Na>^6`k5}uif)CW+A8pFspo1kSlGy4rUcw+E^l-
z*N2(DaYUb6(VF|U%Ly+RA3Qibj?jMsF-daaETeMR@-f^C!fXw0s35pg{Y2*glcHJS
zG0=3O6y5^;#D^-l)z>9AG$Yy%y}Aj64JNXA=bRRFdc&nasvJ;^XiGX;Mo`nhO@ybL
zc;WIre4{j!lInn7f-1QqRE49@K8{H8qIA%4|AD=oIyxCeSnHuG3o6nBV?cieaC4H;
zepm%q(bsOOkJST{H{uej2XIz3E9MG)K}CAtxP|Jy=4xFX=@=P7A%qYb(F(u8vgHg%
zQ_ESjIqUsaaG1Td4>`HftmN7fo3}^|FvGz&YU(Bn5Qt9pvc`zY*T_s>a1HlwS
zN0BBWOJ{$K*{8TRjhY=y8lZp9ri3kea6p$-QfnLesApvq&+Uj0~iZ|06Ydf(osB8u6d!K#}t2+56*BLkr2N^
zR)F$^(8N+6kdqoj^B9OH$7+4as7o3)O8jl3k^0RWD$$2?4}5>T(5WI0g24>I
zm$Hg^fi{+v={+MFe8A5nmo?~mJAkfXd_(S(gOV74anP3HB2i)M>W_y-f$ouOcE=ic{8@v!Vg#xnv9Y-U7)GD
zd)g^Cx3Ya~IQ5(kB^fjo6*8CxE32uGlnK%Bhx#N
zmo6`#N8K|0aiP(ecJId@Y2nA@yQr#QZpd^;KnLTu9}&dSCFa|
z*s{PMo|=EhY8w60bfd=6O*5d%(_g0PE>ojcdQD&IB@ocME{=W}>9MSx8fx;Cmn%9h
zqcjAc=hJM=JnytD>t99TDx@Ke8h`s%;JREBHPHpX6*ovUh2LRA7CX7$SNHNI3V-$e$)VYexYDL-N%a8|54vpm-bhhN(@cGN0PiD)Y8w
zS*;}&(%KRWX>EyxxMhh^>&g-%GF;b1KeUz@HPl2YFIUuCMybhM=2LG+nRi;2#k9m?
z`64|8B0SVA1E5qFENvKm6F8Ha(%L>4zFEP7Mnr
zTV7tkTRqGI^OpH6aB!KoEz7FWA~nT(v^cfhp+#z={Xc8r9y)i
zr-qs+<>iWc%P0+*<2_nj=AD*hvCtxs&K6VkLqRTdKxNLMi?I%NAd8_HM3R3Pxrk97
z>tb>G!P!X*ePzn@>Q
zcKpiLDdfpTe~x%nk7q)~_ir?2aGxFU~&OR>NsuA%k3)Bk`phLd@rj%A`e
zAdPiaUDOrLDXUH0MTg_o2OfW@E2FAU<3OJ;>v{vTrOa_t}HBE~LZGGWZgk5ZAxekkv<^83{6iLv93X+pZ(
z^uFpv(`NS(mPlkSRK5g#`wXQx?a!!TDqUPNKwAay9#|u)%>}~$4}ccZ^%n|dZe(+G
za%Ev{3T19&Z(?c+HkY3^3owVLRRxEqRR*`GRR^7Fm*3I{D1SCMGCn>ab98cLVQmU{
zob6p(ujDpzem}pWp98GS`vnLBc*bY;d4mCx_s!aiz)rBv{|8b!EviXT->6E>?lUuM
zk71vYj(8~+tG>FC`l9I1pEUg^Idl2zZ!iCNvEf4hdSfp!gjN
zPSGptQ+)ey{eKI6@F!!Uoq}?6D?j{MU4Hq$mvpfzrO97@dx=wi`^V3Z@}#pemq_3H
zY;?Lli;>>O$%GgKHL0?ie-4+wzJ&Q*&@_;@r}S00gt-mteY!rQrqjhwbMsl#CvHkA
z2WMmT+D*nNn_{k-u~W3hSd*F#?PQ!&F}iAGW|={%W`AJI1poTdypC_ZUz%p(+`DNy
zOyTV>Z^OL}jh@nH`ch9Jg=KQ-N{`#z(@csRdCJEX9dErf1YgF}EX*?Qbeq=SUw(f1
z@cHG3PwJxG6#R9HpMQSQg>{UYNoOyge|z~?rM&;Q%jf^55eDy_3G^$U|La1}U3z<-
z<|d4$et*jgAD0hoFiN|0dx2W|`1agQE)vHX&*&mAFRYC^+Ug6pyhz?SOB}zx_yqdU
zYIzYgv3fSgP5JAn@)WFbG2G_oeq6pVF=tct;jeuVxoy@iIK!Bs2Ax3a!lqnYnljM^
zpKdRdeS2YvMWRkQA5TO+T>J%M21s@|al}5}EPsPi(a}uOcC@dbuNy(bjo|$hh%~9*
z50Gl=!!i!qn$Eom)awiD%Q&15-rc5TTv?ewa#!C5Xs!#O)l8jaZ!gqG&<7lqFAI=U
za>`cp3=2JTazSUkr6-AgqK9*COSZnLgIo*??7NG?11kX&Mo7p@?rB78B96CtrMiz5DUdD(;6syD>(h&>1wq-Ne+34lhA
z1y@NyTTz#Wwck{z^a9>F^{!1e?=O_8=YJ7F=SN&}ESm|WXl)juyr}n{Q!op`9m*no
zEjQ#+05b&aBNeicG9nDIh2Yx74ua3mHV~Zj<_-j(pKc-e{Okt7waYC8Utd=^cWtIZ
z#8rnCc3wwTVeEAVwoI^r;M)1QcYA4?nMvDrM;pNE+ye0VDS?T6X(r&x)LaX|=YIzc
zah8v3Dz3dW+&b?hx+n!5832nj-H
z4)GfVmGUos1_FO>`om7a1TC?`*h+5Og8@c7omEg2>q#
zA*C~%1+kzU-_(5KbGT=v5bA;}C0}$*ESCA$Oi&Ek8(NwBESy?g>R=hHh=z+%u$UDw
z5#@puGJ|-6kMbq3SoDN1p(aT(d{xM^F``;<0F=HbBqo}q2?e@Q$A&Qz=zmTSo6dOa
z)b_4pVG~FY>`s7tXK{t;$$Bzvd-t9MGrcZkk3?x7y15_2!{{b$e9aa1wR}o%@@4WV
zU1(YD<=_8&**y9Emlqw0@G4xJp!Uel@E1KtV}3sDy85zot_z`UQd(CmsdE{XdTW3N
zbNzwFaL`Zo^nBBgQvMj#u|ACoJJ#p?bZdRi&u-SIQL(i?*VmPi
zw>DFmgH?x>*|?6ZGB4K|*fPP!`lL=e>vMkATAxOl9qV&`y0t#%IL$IO(^{YNgN8U}
zYkgL|G~~7$>vJ7*x=o2^eUe@)3!j&*G&`u3-BD1THHQpC=)cDi7=O8q2@PD>)N3)K
zNe<^a)bThQQ_~`aHzV^#=sVqzj$>qwv@7s<4?I{B`xDt(Cor4b(JFTsf0dDzk;q;g
zn-Q;*+>|`c)Qd86wo_)#^~!Aa_-&Ng?C$H889nWk89l3%nF||bR$kXSqcl@%ms^Lm
z7AhmFwN;scEfZ{&nSV1~HpBdEo6SIQdMBGzB&1R^2X$Wq*
zY=(8*={7B215ceE#$>FA-g=dPz
z15#uQi2;HuT|$k-l{9X3PdF~1Y=TaF7(sxG2>li#QJ-9B;bFqMQls2g>Ux6qc0x6o^
z=~y0Mn0iwY-hVbx71Y!T9Do8FC-|h^oh~!v(ur`L+&{pJf!Enn4Y6hi&u&vjgH1(3
zZl5MKzds`+h>6HbUJ9H<_
zr)2XbOAPmG)58AKTMK`G(Z)Pph`;CcxvmOHug^C$_Yd=(x$g
zg{TPgUVmsyP086?jH%|p`wD^tBl_}QxsA7M7^tAIForLn@r4Aw{;7x&uZX#BUud)R
zVWV@_yW+cR2AAQQ_79C61LQLlQ5v+zU%I
zw7%(qf4tD{o9LHSSQ;a7XjPga-@H2=Q8j1ftAEnFAO2Hae)`|HZ$%}0#XD*y7q&{u
zYhO(JyzB>}3{7ZmeG$yk
zxgiOkt2!9%moGH&B++@heJc&!&D+N*E)nkA{5Kf@d-Mvr{=mj%1cc$O
z%YPNQY-ef8Cblys-ZLk+i?VP(%}^_2tJ(xpHgTraGQly1>y|IP(b3s1kn1N$QZ)G;
zDYbn8XAU_6l4SMiB%`zQjHd8B7W-iiG?4X>IpeMBvPiPUE*YgKJRaAYjAD$?Se;`p9o$5mOEW)w4kx`bgCg=}la8se3JtR5h)Z_h#ecRb
zI0;n@R&P=Yxl3f<8SYhPkVW8f4t+GJBRL+_>(YgHhPZtvC83*mD3~XBU8nEBzTx~9
zP)F-M$9C?mnN2*IMcJ$7h*wC
zubxV=bItHc&RYxH2e)_nuGhkGc7I>Oukq~oKF;)zr|J3`6ca{kXLL^$G
z!Wq*8a)?k-qTk1|3w40j6ien|f>y5d#O%b{c&g6+27-C@F_@?=YprcL8h=E^MCiFI
z^f(}SEr1(>#M;srr}S8IrS2Q63pfq5Oyjg+!y&X~7=6P&Ds&uGA%EJ2+%I>CAxZy@t@LV7
zINp8?97PzJ14f|_;V!&{T_GXwD|IG(h3-bU&7kKJAu0K&ustZ>1W>TbjnP`F>%_nz
z4H~efJJMmN56|eM1%JK#Voe}-C%}2E)x}Ch?KDSW
z%Uv=EiXXIe1jR!+a};V{sK=4N6RRx*Q>0)^?rLT2d=^T15K#DMiIdt=B4}f<28r6H
zj}HY_XaY!8eIm4RIKI{xIU(+H%H5p(q*-2UdDsBhrx1p~_oiU>oCp(&tv}X|YH27oT>4Yr
zc0qS6NAlR!#D7bk*6G%eN!9c!eF`@@%=nXM8?pimyJbHJ
z%?2x3lobJ;<~#-Qm_W!Mb_h#bo`BI}%zq-oXI$0*TGnTP7pzOf-ez&$-HA98a=>9I
zfIud8z=3JNL`0loB(_~(-=7e(L2wNy4NpMx@S@lW4FLq)WY8EDt3{YIA%>*lG+H?nN0X?9r&f+Z4xG3Q`yQ9gBhD4M
z&;8h7cYk@2jW|&X`htaF9%wBsRqt-{VZNj8u02_EluI1EAunGN2fbm|wAc}hJz(O5
z309^7D+l^SkesF1Yt)Mqm*v5JdFtt~6_*d{3Ar=!;)KR+RcE6NtTPkuOH4vB@Pr#v
zo}Y8yu4&;EOFQhH68pU1R0FN=F-Un8lJ9Adj(_x|SFR<0?3u%K`f~Qj0elX6y(w;-
zLs#4q#3527C(d*^;m(zpzsbFeVq7+^^iJ-s7J7i+GXp_xGmMNzj|)%mGW9rlSmH7)
zsV#v+)KHEO%#}dc68h@I!Bu(MdBcRRK4np?@=jnAvCHv<^BHCvqnW#(q)Yz969)L<
zlz(O;yZ%BpNl06qF5al!hOOjQ5mZcwt?+Wl)5U8wpc7+jLbwf$0C|q?gJKr}1>x)J
zKKTSkrvJz-5P5C=eZ>YW1LTAxIa^b3gt<*TbXjw(fLh72G)JxXg^2-ono5~v1was6
z(pfr#0xB9by>U6=U$C>qhg8TbU(pxPjejm4lo-BC&q5h#VTs2I_Y1OJ-}S(nhSiee
zc(Vp(_KoX{}(I!S)R*~H=D8FP*M83h&w8lEQ%nz9_%ly`>kepESfV0qFJe}A5h
zB^}uT&i*so0rIL;IZQ@fu>2eT*sThkq2j~#q$fwpk9&kmEzfZtm*fH&_Vv@MC3T>U
z-CZcaPfi(kVk{=QI5+~C99ak0kfIYce2{+xb%Eqc(6)$-fwLuYFJs*88jpsNG#awJ9Ut!iGe$eEWQaF-m@j|
z8(!d?!2IxiuV@R9X}E86tsA~>8g>pZx)WcAjPH9xREJG1MO1lmA*Q5uQi|9_
zG(~6|t6A6Iim%MWD~|Z{#P#uT-tewqxJ+;!h=jaR`Ch^bE?nf%3V&XU+iU`OK{VxVv2XBssI?9$8>}!X`^Jv&Q^`KZFfZ&D8cvPtK
z6E~078q$D^=-d0D(SP;Z@G@=ro!Pe=Lyy7Q?Kt&%b(^bbk$S-d7B>YI65dr%FtRR#
zZ{{J#Ci1J@o8@gTokcT*=<9@%77GFK-N1N@!1Wl1H3sSmQ(d9yQ-8>Pzp_w+$(Q2<
zY(c^SHzk*_V)cn71Kc)9sO47ZwuRzm~ie$VlMt0f`9cPx)dshK)Eeh(WZ57
z$JzHte#dx`B{clrQF>cOdCEjsn!@#=LBSHP2PsD#=s8FD!BIhI1h-)&E^YGN3FJKv
zc$jWM8=*xZh#Q`*TUuh>r(9|@wTy}dfkL?&y|ILX1}0Md0`3uPJJq6wTC2S4>uuVJ
z>?qyok;A$8S$_p3JzpGto0EMlA5Yrk%QPQ_FlBG&Yveaie*fhq7_!{5jc-lFiw+b0
zs4u_&?d9i#&YLew=h}fO>C<=G$2}R!Hv)NByFos|KAJVm
zOOnni20X&BTkQFR6VV^QQp4dcJkWmu-F4iJVAOo=eSeB~Yy_i?cnd7w95eriv&?Ty
zn$B^)v7}F|W^(=#IUtLOWNJtZuQTK%`R;;WyuL195F7c_U-Pj877*{@d3dmcu`gU~
zV#w|0JJFDghXSXk+Cm
zj|D;>IqbZVyt70&UR=ly;>?8;{v%}O%QF-jJb#*OCuk04o>{_79I3~FnFpm{_`O3s
zJ6Tfu7j%_bC3j_F2P+JF$%$BVqVXIeLa#G+E{g@wP%KdI<;QGtTi|V{TISs{$kkVX4In2Nc{a_Csav}-k;?KHqq|Ejs<8>-6c!}v(e^|PP
z&U?L1NNV}%O!6_b-Ea**ozr~Idu+!zE;;K&av$6QaPLElOm5^r0sf2}gmKB1@}^Ei
z=b#}ax8md$%RniO1)tozEp|hm*at}JqJR9;cjb1u0CYRuAe!sP3m`5cu@N!-RNdU>LM@Z@7(=e@VZ8GvG;zo!(h{bp)RaKYjfzdaQjH*I_7m=H2vhbxUee*TXmKFd
z7o*=38a(FKj8m@ruz1-Xd}bCkzsab9e46$gdO9a8u)k3o+|RRR#-R;XEM99aw!O
z4v;^}Zfr;*UqM$j)nHb!i+Ui8f0*a0iMxuEYH;B+EBVu)9B7_hHO>;NV}DU9b2Nx4
zUV9&$iN{RBt3Br#1Y*S6z#U__8>S(vB$##f&QmNYX!nW}$p_YM-|J3coed)|mci$sz)`U@2!E@D2X{X1u+$T4
zDuf+`%zC
z!#vQiP>hh^lmTim2TduY=SJG&+8X+xiPE_wM*
zob$Ml3wHj~GV>IPH8|&9o~|@Jo9Dp0ctCuhAojI?FO^YvRDW=#?^>lwM>!1Q6C=rpKM0W?U?F@V9>0
zfY6&MwDkFIYJbj9OJYGX(98dfbWyq78|j*Npz{-%j^i$~I1z+@&}cn5env^67aL
zch=6yQQx?2TMJ>b0d7rcdwZd)1WfcC_aJOY+yO5c?7kX#&D0??KDFx=JKxmVyWiX;
zHv?UUe_-7KSoZ`G5oTaxT0h^lIv*umZ95^T^^wqBpnoTJzV2EAYU?lKVIX6*aK)fD
zy2-muKWtDN^JGoZ4Ta>e79Sugb|?-^OOh7~p0J$93D|L`7eAxS>x2m)za~~J^`GKm
zl^_0p;gZMw=h;~0|qGmcTk>^w1>Bp?0NZOEg`$hCwa)$q%HiMd>G+aog$}&a
zTz=gAiL>tINAurz>?!MF_!F30PF*ajACz-Y?|%@U-<^dD2i}JbYY3Ueg+Y82ED}I%
z>Z$l-p^*aUMAj{<+Vim~zbs@H%jG>I&vg`Ijr;Bs37k*??@R>V&E?}f3-6kqCckx0
z4$=pnDaqTO&yIibNTLlx=izMmujK;d6V-aWJE80C>{(TZ5PgYbM^Cv7@J`G$u#Q-M
zH}+>9(7_Q%d7EKTqN;M=N!om
z$${J;zI}%0m|~TQV-d&0+&fp?32GDMXT`{ma~yeDrx=*wd5CF+`3Kfr%P%PspQ-qa
za`zGEHaext?K30KrvcHTfrp6UO`FXBLY{ulc5|J=lXk(Xid2RVN?
zHZ(atJ|J^+a%Ev{3V59DUE7Z2Hj;g>ugK>CtP$@Nuvox$cl&u44a^73?s%~19$;qv
zAJ|Zplu1_RX+@ABWmdWDhS8Q%q(o8ViHH*?%xR>5e=_u+?Ct5V-=F^Z3`1nW*I
zPD74qvZsIj@6%uYbovLK9a0!w$mxIi!t?*o1wXmbyTK>>+{~YU8&ALg`{|djPk;V0
zo{S%2nCPE;`uevgbLrF@Go8MEfBKJcwEjO&U;poE8f>IiO{TM7|K~&}f_XlXhbd=2
zI(I%Xy5|#a$YU~jn$9PDUaqiq$~Ik2g!p_SQd2&f$!wp%O`*FA>G?#UE4+URCSIEQ
zb-CS0Ghsm!^VUuE-|FF#7GL
z@Q}bh(2#_eAxQ(Z?_Ela2Sm5)r{@!XnB18B$QUd8W$B#SG0~*$6K;GyVccM{9bNL<
z^9RpSG#lhi;TiV}n4|C&;=6xB@3y=vORr~_PYO0w-foy8V@Ti?(>fLHIAsTh#FlG@
z;;e>Eq3M3aWZ`$!=ySWh;aTxA%6bkwuQIx#e87Kfr7LoI1%88Q
zNUEl+;S9;o%kndYlygYi4+sa*Coop!giBM`M%XY`FP(q>%7B+HFYxjQ3k`%6<Ul0#EYb|rJm{oF&1s@PD)(C>&
z4H^xJbDYtga$?-BCd`qWy>L%QTSWsrfYXF!E|zbqp>y;c4Y)yX<~r-c+Y{pHHbU=zJm%Gb-FZc;A}>4F@fJ&IeCAxxxJ-UqYD8kMv?GE
zZP)=Ugh%adI1ufx%Q%j-v3Y+v(O@u&zjX+)8v2l9YY4HRDS(g^+U)OY-eO=YH;Con
zS?)K1HmTUn)8OJ3162$Ok(ncRYBsni(GJ3cw#bU@9K^krMukVEBxflIP@
zNR}txT&rD51igPJGRTWH+kIXy9*mt%L&*6=V8WgL`?u4pBmesS$z?l4Khl#^6ZvG~
zkiyB#!KuG(bzWY%c5c>r;WAkVg`eL{>s^h{4baWJ{Dtmeqn~LS-1MrS@r&dk=_?ov
z&7mxQ$m+`Nhy1zqA-$P~$-e5xkGyH%ttb1|nfgtLZ8Lux*UjFUV$=@H=#)!P%e}9f!6K%YNrM#jte*onnf{HETcMS$X0q
z+HhBJmC-{$Klp4}5%|g_C;vKOU>C-2;1zJXmk4NNEoEYFD;`BnSSXK=OFWyjbhKE>
z1WK~8VQwIb%*V@hVbgd#hv22gqcuZtue#ydjy8V|uP<$g>aFjLW}eH55xkQxYnV9z
z1MYG&RLo`7^vLS57v)Vgc(vh<)rqPq`b8fJiw6&pX%4MTI3!|nizf9$k*53+S%0v|
zI*MA#9o{WbZ@BOCjQr}p#C#n2_+kg=%Uw-_k10hX!(y5qKy$?}uoI>G;*$INK3PntP7Yz|&nw-J
z4S_N_Yl9WOi=2d-%0MZfV=pfws}K2eR2F}jbD1%^2r?;BUn(D5%+eJ=0w}JZL;iiV
zY*P3PBnkB8K9z$4eVnDkYMeNJNl0H@ZkGkv=wUWOvvQvmhDJw#DLvj{gX0eY2RWw#
zM(X7pJNXt3uaIvQN2Hx{nBdjmiX+mR0o%t%`!N(B!G|C1K7KS!Y}@C7uF~P<@)mz^
zcws)O+#z_^1xqnysLw}Qd9rvu(#rGp#(bpM>DZ@^pzak5b_H4DWJ-C&2iIi7|=;vhsu;#m!qWvqX_#bA-GG{M1+?6Feb+h7Vo7dCR>p
z@J*FY`QSK4Q#l(dbRy=U@=Q`*lxBY?|Nc%Hp1VcPSz$AlldEQK!QH+unV^i>DN;?}
zDg#YD^Sv714KFDRO(nA#V}#4V<7X~bIS#cW&z#IeU}65}^`JPj65s6Sb*>m6U9?wT
zOgcjikIr5#X~qsVBwBpp)nui|^r6zb`m#L!NXw?X-jHc}$0^cUd-a&qOR#?_6$5tu
zAPf!I^`yAb)H3q|J);diUGCQ|ZTZ2uFuPb^PrdoI(7ZQSla+cQ1nQ<#@Uk7a;a}iD
zM)ovzA=Yn@JEdY)qD6Ws3c-Ub^v>B1a?*bgq;bkS7pFVO
zJI6^FdK&ABa#9VnNC@e6WAK+#PR`0~P}Ec;i2%=g!$x{QJd|mj297MdV%)RAd)c4^
zCHDc#=QYN&eY$xphk1h@qkPWE7n-}m?*iz$WP5m-7&w+PA<>794*|~-=zta*uGlp!
z&)DW6`O0amj^tUuaXNpevS>zK#c|ArBx49SpkXQZ2gzMBno=|JQaXA}Kt25?9s!D%
zlOv}t%h|h%6pp@7yLTK+@lJLydAwPURdGNaULodR%H6vS>VG>qI5=rS;#nmJ>Ci$r
zLR9GxIrCqZSruucb)39hyw=jd}x&j|1{<{(ufcZ_XJp(tLl*29-#J85(pY<-n^!
z=3LH~l)KRsQ?Yld3&F~+Y2Xq;5T};`j|D2_&OHdW<%CcOM!R^pF6SQn(dowuK^i@)
z99%o8;jdEW;_7Rlg
zAI5#xAXZ!Q$Y?5d$tUQUor=&4M?%H9$SpC)1F)AoF4TVz($&MMau;laEYZd^VWGHi
z&JQ{wp%Y~k@rr8mV#D&f9)0>Bm17$1+(+n7>nWmq;M)C?i+Gr`6qRXcN*>^rQ{Dcc
z(ZgR~=7=49b~eQxgQb&)fD%=iDK2^?E*oz52v6fq(L=O1R`j<9)jk#jg9p11D(TFk
zsigUK%A$X%a@FMyUWnin)3P^aI}3KDadn=F!=rkdQVpCKMRD1C7BI#@RPGm1D?icj
z7;@R*$LFi@ED3bQ=a-2%o3Fu}SSAK=#3OkeEY_Q?d}Q)+m8{(>aA2bSvUE=E7+*Mw
z;0bs30nnO1y25s@BI3@0FN){V>nqvFGLakdF&=+2?^sTelz+G5FUz}i^x=9ARFsQB
zVjo>K<;KW>Qu2ar4J88MhW9HNtjawZIJCnpY2wxdj;g)}M3-`ieuTHLgcD?73O60~
zSSO9kmhdwa%Q--sShE_kUWT3=!OBHvQQz_s8j1t*5qZA1hmN(O%h^B=RE%gZT;0d|
zc^rQ!<2Oc67Yh7mF>Y
zw*IVT5Ra=3Jv#>UgU&TY6p6J?cA9a#JdKSR#{zgZtv)ny`lDQsmsf;trvR9-%U!wT
zy$c+~|K2e#&ZHER_AvlzkV#YYYREO^#1gD>2Wf-Y9LuG`lD*I@JNO4KMT-KyyfS}@
z1{EA)Wsx@IF&rSXhXZ`jqEQr7f-!LW($qBAn1U%Z@L_~+FaHxy^y6)1U+#tEItg;9
z1n>`L9LyJIDGsqdPcQJi`KZoClk?yq2ftM@V)M~s80bE83gr;UbdTdSWc#9E9DF+E
z!Eu$3t1H*e&ChG+Qi8!R37wg@Pxn^wHzob3b#mC#Y5PE@`yTiQa~D&Jd6o9b@8w(IvqRnNO)Q#kVJZf
zWu^y6FZ=L+(ZBzEa`BYMFyu%^-|74Ozx3zZQwY=LFP9&DtAGFP>F-b8(<*;6MHA@v
zb9ge-%=`Z2!$`2&n_L|IRvt~`V8dK*;#=9m?W08t)L)3SNn3)sb$?{GS|7rz{?13s
zmM<;bK3eq`#yFUXcJZpeKsK+X?b%hoE{+y07cJaATJ`srKb`(T?`23~bT353@pS$l
z8vUQ#=-oiJwmtpzI|~o}`L}=Z^!vY`f*Xg)8anX|i%dvEFkuc`F4*WiA?Q!A?v!RY
z^!%Rb^lrEHV-!@~^cfDuWpdNsB5i_<{SX3KxFyNe;0|$g$$hS37Moj&>=_#0U@Xh4
zt%sX%sgY00D_5EZ7fq&b!DTZ{CfH2vWLn~rqq$!k{Z<|I^xed{hq8Zxn+Gq~|DA@D
z))$xPPt~2F`7)R6qkAfUUmZTTUA1xhXw~IJ%aOOS?r@^%7Um4|s=wmsrQyrWsBk@I~g*TxTv#Bbj&;892$B8Jv+F
zM)#2CpP!F@tB&R&rMU+4x2lEPM=vc9#_^WcBAe$sC^|GWxov;)iDq{B`|9we@v0GU
zxau_KA^594<48CxK7*sO&+>55c+tk~qh*hFbOUV`!B#z5TG&!DzJAE^=%wYdh1*BV
z{&s^O2J0mUntFI}nhIDumZft?1hIVzE(p79a&|+i*frBgafu@oRv;VB?
zj;m|DDJ6A^c`?IG#;e4SkEYVHxY5(wkm!rOa!GI?BkdHcgQLtyiiMvJp1}GQ2V^lr
zxjlLAI5vNqp*u#g9>`9?qn$*a8?r<23;84-6M0ZuJSO;rSb&5_HmnMpu%4V(X6vJZ
ztibC5^LU_!b}Ly-!eTuj?^;u9tbsi&*6w*s;!cBXdtV+{#W3xYOk!22!;@D}3s5ml
zszLcup7{f{89Didb*}ve=ysJ;+@_=Su!;uczW2QdylBD#K?HiPiV0jUU
z@K}F&&z*|vNSyH#-@BK_H5>dvwQUaYSiv0{FLH7zsfM%|G0IX*YnLaE$w@zO*yYef
z{(R6mEC#&|o!?crNUQ_R+jRWI5fJQS-mx-Pc!o~kkP)H(5bu%iQbvKL@qM_A!ow(2
zF8l4vcfk5`pu{)itm2$@EV+ZC;@r$!YHxog&djm)3r>(_=7u=4+P%wj6ZvFjP8TmT
zx4Lre+^mk0t5ktAolQz82zeZQe$@{*WjAD+SaGI2ls74Mc0d(jMc(n}m7Y%MIn2ol
z+uNx6&n
zgQ}K(#(47gqFJdZM6Fr=xk0vHFA=_hSpsA8VmkzC3h@k-Dh>^;3b{L3FF$6t`Ciq~
zMIJVJ!4!E-LwU^-+_`+Om;1%#Drm(6P9v)}8jPO6caB`44MQSNf8@uslMn53AG>N9_g5DBa6ccvs8G;zS+i0xab;&c@}B(D)JzuCLyNmn26id~a$hJXcfNO;f8~6a
z2l?w-H`&VDpHeQgJRsZ{8aukk8>z3Dwk=Iz!)IVaO9Nvq9mHME@(#8fYQFl?y_wK9m
z%?boJl$$QM)G2V>ooJTWxHM&dok?NjB)P&Dg`T{*0wfERk&Uo`GRR7oZ^R8d0!$Zf
z00=1D)T=PXlIT`U7{)2B@=6c)1ul9X-3FO;#bj4CHOU5UHp}McvFi^=|K;uKEJxwH
z+|{8_XcdSSCs%96P#XY0geQ6#2by#qR{WyE+;WXexUGxJG-HRSn81dA*Cn`8_r3Jm
z`6XsDY*`g7sN=^K>WH~laN%zlEf1@j6v{`gmNDpk0`Mc2LQXYsVhI%Fv|f9H9U%r!&cwdF3!mo&+mr_uo^&0!l44
z(C5VO92Y==08Bm=<$E;V!8Qsh3F6>Y(i1I>U7wB^Y_#x%cOnRNh5#QEFz)0Z=4DC1
zrXX}c0wG%wnO51sohl|A5yVwhplc04YLq24Lo(L+=@e(xxs>RCC*PlZ9wL2v#j$V2
zv1xEA)7^ZluD*Tray|7GW&odT)m6xY_v2i6wES^-v}pL!z|DhYXU5oJqPLMv)twm=
zy?yig_toJ`+f^graM`Ifc^K13D6;I++At(~}dGykB*~0CkWq*5Cm(R3h(DycE
zo6SQPB7MHsb6LWFxP0RDw@3>xV=aahE_9;7060XXRGhZZAfm7o@iR2O!C02Mx!#_Q
zbuCfxSNZz=IN1gtRqr?KjE}1KnpN-5OMS7L-Z!rDQTG1YKG?JDeV)>eW$&Y#Hx2LW
zviFyx->ReZR~p8;z&$#8X`w26Uv^m3w=aKR9lkVPH3ANQSDl`V(^q|7>f2X+mWPYR
zi#Bc_Eqk;xUd?H=Rgcz$fd+8$)vwE=mzK*GZXYfC+r8{PePt)>2=P8T+PvIw`Y3z<
zQTG0$?EMzrY>-#n7{ms2L*~s8{WvSo7rWes8+Y9^&52C$G&efE-ZCe@@{Cs>b{=S|
zOyd>yxKUnz9F<*P#`spDc2uW+K_xdTwa2xOXnZo^%FlVL9y=_qj%%pn;_eNXoRO5F
zd3iiTfW^9M)$^rN8MspflND*hqYI&S17THgz{>w4cw*muEu0tP7F#r
zej#%3DK8%&9z+~C-APZ>o&LP=tglESU;o&=jWDf`PoC=5|D47@C!xQuVJN;XQe{{_>CQG
z^DUZZU-EOLv3Q-bqQYjT3OYhl&uthleVm4<(1YhhNy{qMONZQ}Spe|mn3@;m-Rb`jO
zqTy8(k%zu`anQXg!-@6H@s}en=hk0;+#!r^+AQt>7cyb)
z`%_Ld6R+Zmq+(5{Y?V+taXzYsAf6m$9j<250RH-x>$j$6qUXcLLT~c=jN$-oL+C(J
z%XhCj@qLmo4cB$%Qo&D)@TCx@i+suP_a=1P_2>`vJRG+|AD{lOch
zV!oslp4`>+?G!ZXt4G=pycD}(8gzN#D?gw#fgBa$Ef3CVxQZj!uOiCA*hEk9i$@u(
z>_2%xPX6KOg(8pfI4J6KQL(534;suVywlUz#h8ACVn@4HMrcsDgKL?1
z^}~IL>v7>LI&!tQVOdI>uuLz1@wx$Pi1U@@No+?@^Ex$?`bcFB-irqnT|PkG0xy^k
zSw;(cy5)rQ1B53!%PMvX^dCXyQxr#oG%Y#l>IkV>7JVMeRPZ8(;%Ty1IZK`dH^c`V
zzx+6{l#@!od@=>s78_J@?6j?-e2m;T7chRRV~nN
zKel45iOZ+;CJvTNj$E$gGCHldW*F0~PDD*43Cpy)a_!u#gK$MMJ(vEGl1#lzi-cRz
z50_)=y&d8>xoU*oT4h0h#T?={7I1m@Q4;y9mHQ43j3RPSr!iqgo%pNL%gAkbCtcMZ
zS)ips%;NwVT+NyqVj3WifW#vrix;`W=|ns_qnKf!NGHPZS+88@<5XU$+)`r3n`SXf
z7zUElzx6_xM--`VNc<@>!+nljMY-0qb7ZL#C4M8iHSsySPk-ZeOZ4oN>f
zqJWp2B`4Quk$W#OuZ~r^x_m(MV`FU5TmFBq@|W(HpbH2cmkZAf90WBrF*cX%8wweJ
zR>Z3)APC@H@A}!n0{MU>wh`C?;`~1l_Keg`&dgcWA*-l8KM%*>KRo`PhB+laI(NR1mJ8O-$xie60(#P?<@aeEZHUE_K0aP>
zL(FgJy+2(D2DSbD@~AmXDTV2DVRXx*(m0tkpI<7Bj~DU~g_wLYMI&};B$*h0!+5^n
zr?vGMXP2kvbVow3^%-;ojun32r^Z9R!<%5DU_ShE!FV@^vV-VRxN+3M@HDkCU+~e)
zX1jy%4%j&C2Yy}ptMieZ_i-$eS|rGZZ9}~QSb7p(QWvrvGT+0#wmm-7^`xLnqs%aQ%etG
zH6K>=aTago%8eI)IbnI-In601yM00S)yjqhezx>}#cyQTu#}6z2Fat!4kpW5=a3{@
zhAVcmd_^!(AHnj$#)K5M2+QM9
zyx#Ue+F)-P;rL;(MlQGRMg4#Vnw~%m
zZSlC%>q>UQ^K;nl`uGul43;sKJun9|57W^c%+Jk%9n)eCY*;(*Mw9>TMdyvl6Cd+h
znV-*Zz9%<1x)KUJy|WnyGo`68l*_oha?5I`=OHi1+@a?yHMgHXcQ;Sr`Kmt-@;uFj
z0bfq{#tzv|x@2E$HgU+HEgutJtku{0^1ax)T0bwt>DJlID8Z$FVn@~7aQKZAZBaB9
zz}pA>P0^2};Cw%xx$^2Hinb()1HIgBr@$v%b@{ojaoF&`7i!1(cF_v!y*A*OH
zyv3moe)?f~RCHrB!3rLQMi=jWa0a1|=Cd5f#T6D`YNM&FJwgY5mxy*;cVOJXSIs&_^c29~-~q^yp5!
zL6EUtZpM=)Gi=rC@_tIuW?gq3opxaB&AvhjFw3KVxe3hp21Z3VUh$(nr#*YYq
z*gBJcbIP9Y!X4Ll>xy8OJ!SOKo(7C@i*~sNTL?`a3|7$a0J~x6GdwRd}YsgUK@#MeDjI@X73yC~gM>!dttX
z2+;U7eGW6Ncv3E_cF+<(hi2rOLI{w*(_o7mpicGuL@PNv(ZhI~q6!VU3cPa)53l6d
zXvf0cy?6fJ#n{`j546nSYltn~q!GQY^G0TNWzjP|_}N1L=Od$iKKPQ>qzGPkS`&YN
zv{2*^HAEWqxM7UPC7YhFYbflHLNj=WXv37z|LLh!KYn
zKCL$o#?FT!q;$wj)E)lz6OM6j*zK&LJVaCJL4qkLH
zd5%{F)6-Pm<*3DXP>~>hRiLAP$!UUvr5Nj?13r7?5y+0}CUt!W<=0u7gigvrm4SFE
z&O#LurGxFfdBHOUH(6+q6q5oJ(;?`7XqvD+!Y4dtg83eh9}ZQ)XfFLOF?h*=rebw+mb5rhCJmT+c=FGvmJol&Yb2$&4_2YxY2I(dIs7xXhHudz!T`JKHh)cE0rhIoTSp+i=FuTQp(-&sq4rvY@7H?X)7C#Yf6
z-$@$4((?xHM>OLwnrB~|v}VOuF?BdqStwPG^NCil=GAgUuM0)>KK2F{6|Y7HM^{fl
z3?~vR(n>THfced|PnSw2=c_2SRJjg(fzUPmMQ>Q8N9LyE1Fjylfdj6*Y#c9pvkUY4
zx>9JcB;*cS#=Opduxwdd35^M-I77c89oEHY6=pE~E7(ThC8befx&VjN+7-
zkbn2%qtD=Em&lZuFc03E{7ZhsOy$Iscdm__bw_=X6myb)fv1yVrZ|}CSv%sam^2=H
zYN`?1g)2&Lmkqn(4z-;3pk3{93gVr7S1r)gY`;VxcAlEU>NVaBVf$+Rg&`XiXK2z5^(zT|0<
zP|WToYq<7*s;V*NgayFU=AAY^VzQGhVS(o~+60~__8AX6k8!tW
z?VQVEiaI&v1v$2wMqJ6Ybj-O1S6$za0Z=4b6l
zj}XC*r}M}8B^AY5c{{D=+rVw?;?Lp(5Q+#}&u4~zGuT_tFA@}g?EbQp=_405;xIe_
z@1TM7dr=&E_mK6f++Ia5@!schm(oQjF}&yxuG$4x;tqvCHB)?e(F@TgQ`Y9XZeGrN
z5rS;^(BV-Ue(17TW~F4sQ;~KqU)L*1-4{EniUcdx>6M@cA$MRy2y$b)e5EGZ%Rw~4
z!@?1NmbtZx$K5seMsQBIKg9TXc$9}bPd?`jF~JSI-8GPUl@1i@5;d{}iy-|sNz0za+
zS%b%3inO;&IW%MuoST-s5W%%@s!?)JSnf}MSXLyfG;p&Z$5pTLD@3qtM${YAdb|a0
z3-h|2)oMfnI-fo*kBTmz>Ab|~z$~8|HLE;7NJK2I@;zwjW5rbXyIH-clGLI!@G^n5
zp|~vVW5V@avCD{deKizIgo{)m+c(Us_Pc$W-g&Tb3ZqGa#l`Dw!;==Tpz}`ZlidA(
zZhC$<4FWS7kxxG$+byF+%ux{OZb)F=yHZZIGw}(mCG#PO7_Ly?hS$*1i%8=217!;N
zw|ZOoH_on5)#tyJf@DnSeW^53KE?BDu&XT`^Ujxz;pxK1r_IC;J~=zZu51Xp@+uA-
z@k-5#ip>;!M(gX3&ih`&WN29XkGKs(5-$*
zgKpjp);-0gqH%{bIC0diy4BWUx0dKyFhk5+UHv@#r66p-b%bDY-tKBa?-X`_yONy^
z5oE(_(9Ct^JAWj?V*KgO7iT{@%VbjtIsx+@q+|zC|ihsKD_=;C6lTF
zv(RclFbvYEgH#o@rDMT7L9haT#HPoMlfKK=pML1@lm`!h@NsB
zUmM~cASYZWd*oC|NN<%ZtFN%3ET>(s+#>e-!x#y#J^;5^ntjgiy$wRqyks67n$jf0
z(Q^)wNrM670R^l!&?nD-5sZX;B|_U3($K5kD)I{nDQ|Si2K;e~RFo@J29d$3wBa^p
zN$JiAyQ)UNDEKc+d}gZ_$YTGaHC4+B8fk0oToSI
zk5Y_UpctAAoXhmF^z^L1;*ORRuBLHnFS^wi=Dw6LEFXfQqldEXxX&HckmtGgc3jC{
z&FF$h7m$+HrM6@d@RDuGB84MZSfYxJR6XYrS>J-I_LFPc|BIXX+qdSI39I2*rPJ=e
zqV%aps%q#6OnZfY^Fim4D)QN@#+^&AF9s{2y+G8T_P)Zu;3bs=!!pmYQs8)zs(wZj>Qmxd5|
zck7rUYNI+@`Y1R#z#)N5wk(3f6bcXJGh-R
zdHlSJv|{cx@6J^8jPhI=y{y)^P`8EG`!4aFB5NJPI6gfO-qE0k-IcC`DMqa6wcDU*4y8=#Ial-;?V=gQ
zV{twAA&!OU0lcdo1W+&(GL<^N0#V0l`c$X4L59_TI#b80uRu?#zLL5GIcOqodY@Xu
zYjUhYq05l$FwIB4d&V4r(YdVd^xZO-VK!?m_4DJdt8%FqjMP*LR8+~|;Ao40YFdj5
zych7*mun|O&x+b`w<)p3>!d>GC9lh=h|2bN)ydLuX)aOtVYwqsc`KX*Z*{4>RuMU;
zA)OF^dKPzQ)bOX?ig=}`#0|AWI>YRutP6fS(ezByu-A^S!(N{83G)&V74&=XJNWwz
zF-Mvl$B)Lx?a`2_hJXj**epyOK@-Gg(hyX(WbGD}t$|FUr~isQOFR#P&)2)=SzA-O
z9o>No2k8YTc;?-=s;(>3r4~u#-KH*egXB1W?U7N`+4xHGE&nz1g6Fd0?VV9<5u4-o
zt9`u%lwCz3^?2*Mh@D<2OfO^7_14CyGQnv=S%U-!pRF*-0h>Pw_G3o_w3GBV+L_dQ
zI}|28qT3w;y@>(?Ji)Km6pM0H8xBo$b-ji{eLSEk*K`OHSjf9L)|%wmOpNnbDBMa#)Q80Gm(yr2ido_uiyIJZ{~3%ws6-yQ&=({_q7Qu+F~ngVI4W*@^J_>JZTlF|a=x#bm{O{oxx^9Xd{CHq!l?JvA
zIu^Xo2%(cbJU}#n#Wfpwoeo996CZ!)&ob~)FrZD>kyZTE8qUe^gW3@CCO}JnbqbgR
zaP5oIzHZ+CJ9m+u(IH1K3l8_9bSYM9F}zE$FXc*BjJvO1Yn3ahC;e~_Eq=?MLlIb;
z*V+b`3X$~HklbGhvW)dA3O-^$$iJ4RXXu_#e2t10FnfgO67{8g^u^0W{i}TkJOPN$
zUxjoqo;{(6Lg3UwKb>Ayt){ep#x+IY8q7hbl*}Dxwc@O|4~}Lty-je`Ab5|?hEUy|
zzc9hZ`L}{H{OAH+jmT}ot35>v+GyzD2eyOoj_VFPe=eqJ_dI1ALm(b(=yADmsYBie
zM7xGD!wIDg;_R=_=f#jeDajpz?74a&I?x1Ld4XS_P`Pbyorl*&@bsA<|3@93|L^r(Sr@1GK7M)n@v}N;Gq`a0^7E5E&q%OiI(+%<>A#dR<9{E%
z{O?m3l68Ni30>uqSg
zDiS)G+=F;cB9_B9^AxA(oKfdds(Gt~Xu>F%NNcArxlvndFXu4~<7lGMVL9_Ejczo>
zW_*9!zAleq%rk#(+nei6z%R|BrLzR36vl8VTDezxjOE1Zsfl3oxRbt=Yd?&^Yi0HN
z_^`-WDdK`5g)0S0&Z=flhq}>g9D<_}Uhi5)d
zP53PMV{+>c>}kQxu_M0NoTv56I$$sRH6*oQX5kvaE@4sg
z#Tz;m%l&C;!`Kv6@`6v5!_&p;!Hw~`n5P)lcal#?1WRd-vFI#&+`*=i6yPtFix=%e
zV_p%ca#?MV?4()2XAuf9#;YP2RkW+)Gc#C8E~_%91-&hWHV^Uoz?~61fCI5roaDS7q35bw$BgjJiU6f))(EC0fHm2&c*9<
zB|?~R;CTDJeQF#+%IB;u;B9FCk0j=8f!P(s}f^SXwN)5Z}V3rHNa-HzCo5
zhVfxuCpV74O2kTZd{!_7nw~VTDw}mDQlsxy?81tjCDfR#XXACGTp@omIKY0pd~fAZ
zE$k*Dpt318DvxdU*ECgYc=-6!8y=?VIo{b)N{9?{-_uRa<1Jp9;weG1PLqHT*duU=
z0C{y(Si|~x%C_;gWZl${uNF(|f!2!sdfsq
z5PC7_$cP32h*S#C(#kYM5Ro~pJo_4!3}%jae2?A|p&*E~Xao%pJa)!A0x;CH$eUBK
zb8LJVA7Dx^s&{B5U`z(vDJYsW=g-PMYM@)#9XuddF9bXB9;)4&4E89CYNvw&SY&QnhbFIldeDtGqfhK9j#{4N}KB
ziRf02$O_sbMsR;yP??B?;ytsQjw%tdg}HZiL3+l;9Z1eo$u{B~d3_
z(GjNG%&9Uo2Tv~Y7C{#QZw>4Du)j=y=Q)>c!L6e8tMq@ySOPPYAc^B-X`@~whc*4%
zjX}pVLPJ$NK8R{MQs~_@gv{bD=Mbi0Dj5_X=On%&3)3l8Mz-J6{m+tu9dl^ArPS|{
zD2T$4k=7UORT=uDllHMUV`FlwNTN`t4ZnPJvg(bJd^QCN;r#Bo?ny9S8&eqnvrH-P
zRE&}tEWLjUm1Y-ZdW!fSNP*$Q8OkR@C=VH~!~0en&>tUD(lOqH)~<9y+k12h;r;K%
zREjD{h*ta?t03{|?PxWFR^xc3M$^|7LA@HZ-)|$FtJ7Q1Jv6&=f@<
zih6(5K;nu55O*eu@jAhs28E=+ZXVPG*kf~4vGuLewg&FaoEW$VDf?|D2HvxNZO1Bf
zczh6lvP5n}?S_tV_iS(9%fx-9OkBU}$|C3YF0sU@jegFcyH-AXOvB>69Bx}KuPl-n
zM&S0te;uth%!a`Vbr4gPpYj6Uo~MxE9JYT{fS{u-c&yw|MuV4jM0sP|r5h2?`)Z)2
z)iLF})gQD5$emfM)Ir-uG|{M(0qm(;Io2%A4Ep9^}hd+~y>?
zO@iR90>*X@?STg(-esK71M`Qu0ZsJa0=AhNzY6w=F{v-J*m;E=+r{E(s43JuNUYsPg-*M)LLv^pBs1D
zU*l>o8=tUG9!CgyZ5DNkmyLhB*+f$L%u)*}A_hJQa2k>hSsA(Y8NqWH(J)Z2rlU6w
z)~GQo!%>%SoDe{UP}w|Om`TfVz{#CY2RrzXT+)mEaHYQ*2pn8