diff --git a/doc/source/_static/simple_example.rst b/doc/source/_static/simple_example.rst
index 24270e823a..2f47bbfd71 100644
--- a/doc/source/_static/simple_example.rst
+++ b/doc/source/_static/simple_example.rst
@@ -1,4 +1,4 @@
-Here's how you would open a result file generated by Ansys MAPDL (or another Ansys solver) and
+The following example shows how to open a result file generated by Ansys MAPDL (or another Ansys solver) and
 extract results:

 .. code-block:: default
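The example's code block itself is not part of the hunk context above. A minimal sketch of such a script, assuming a recent ``ansys-dpf-core`` where ``examples.find_simple_bar()`` is available; any path to your own ``.rst`` result file works in its place:

.. code-block:: python

    from ansys.dpf import core as dpf
    from ansys.dpf.core import examples

    # Open a result file produced by an Ansys solver.
    model = dpf.Model(examples.find_simple_bar())

    # Extract a result: evaluate displacement into a fields container.
    displacements = model.results.displacement().eval()
    print(displacements[0].data)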
diff --git a/doc/source/getting_started/compatibility.rst b/doc/source/getting_started/compatibility.rst
index 247f926859..7135347d1b 100644
--- a/doc/source/getting_started/compatibility.rst
+++ b/doc/source/getting_started/compatibility.rst
@@ -8,7 +8,7 @@ Operating system
 ----------------

 DPF supports Windows 10 and Rocky Linux 8 and later.
-To run DPF on CentOS 7, use DPF for 2024R2 (8.2) or older.
+To run DPF on CentOS 7, use DPF for 2024 R2 (8.2) or earlier.
 For more information, see `Ansys Platform Support `_.

 Client-server
@@ -23,8 +23,8 @@ version.

 As new features are developed, every attempt is made to ensure backward
 compatibility from the client to the server. Backward compatibility is generally ensured for
-the 4 latest Ansys versions. For example, ``ansys-dpf-core`` module with 0.8.0 version has been
-developed for Ansys 2023 R2 pre1 release, for 2023 R2 Ansys version. It is compatible with
+the four latest Ansys versions. For example, the ``ansys-dpf-core`` module 0.8.0 was
+developed for the Ansys 2023 R2 version. It is compatible with
 2023 R2, 2023 R1, 2022 R2 and 2022 R1 Ansys versions.

 Starting with version ``0.10`` of ``ansys-dpf-core``, the packages ``ansys-dpf-gate``,
@@ -34,8 +34,8 @@ and prevent synchronization issues between the PyDPF libraries, requiring to drop
 support of Ansys versions previous to 2022 R2.

 **Ansys strongly encourages you to use the latest packages available**, as far they are compatible
-with the Server version you want to run. Considering Ansys 2023 R1 for example, if ``ansys-dpf-core``
-module with 0.10.0 version is the latest available compatible package, it should be used.
+with the server version you want to run. Considering Ansys 2023 R1 for example, if ``ansys-dpf-core``
+module 0.10.0 is the latest available compatible package, it should be used.
 For ``ansys-dpf-core<0.10``, the `ansys.grpc.dpf `_ package should
 also be synchronized with the server version.
diff --git a/doc/source/getting_started/dpf_server.rst b/doc/source/getting_started/dpf_server.rst
index b45ff7fe6d..5b7986582d 100644
--- a/doc/source/getting_started/dpf_server.rst
+++ b/doc/source/getting_started/dpf_server.rst
@@ -16,7 +16,7 @@ The first standalone version of DPF Server available is 6.0 (2023 R2).

 The sections on this page describe how to install and use a standalone DPF Server.

-* For a quick start on using PyDPF, see :ref:`ref_getting_started`.
+* For a brief overview of using PyDPF, see :ref:`ref_getting_started`.
 * For more information on DPF and its use, see :ref:`ref_user_guide`.
@@ -65,7 +65,7 @@ PyDPF-Core is a Python client API communicating with a **DPF Server**, either
 through the network using gRPC or directly in the same process. PyDPF-Post is a Python
 module for postprocessing based on PyDPF-Core.

-Both PyDPF-Core and PyDPF-Post can be used with DPF Server. Installation instructions
+Both PyDPF-Core and PyDPF-Post can be used with the DPF Server. Installation instructions
 for PyDPF-Core are available in the PyDPF-Core `Getting started `_.
 Installation instructions for PyDPF-Post are available in the PyDPF-Post `Getting started `_.
@@ -98,10 +98,10 @@ to use thanks to its ``ansys_path`` argument.
 PyDPF otherwise follows the logic below to automatically detect and choose which locally installed
 version of DPF Server to run:

-- it uses the ``ANSYS_DPF_PATH`` environment variable in priority if set and targeting a valid path to a DPF Server installation.
-- it then checks the currently active Python environment for any installed standalone DPF Server, and uses the latest version available.
-- it then checks for ``AWP_ROOTXXX`` environment variables, which are set by the **Ansys installer**, and uses the latest version available.
-- if then raises an error if all of the steps above failed to return a valid path to a DPF Server installation.
+- It first uses the ``ANSYS_DPF_PATH`` environment variable, if it is set and targets a valid path to a DPF Server installation.
+- It then checks the currently active Python environment for any installed standalone DPF Server, and uses the latest version available.
+- It then checks for ``AWP_ROOTXXX`` environment variables, which are set by the **Ansys installer**, and uses the latest version available.
+- It then raises an error if all of the preceding steps failed to return a valid path to a DPF Server installation.

 Run DPF Server in a Docker container
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -111,7 +111,8 @@ DPF Server can be run in a Docker container.
 in :ref:`Install DPF Server `, download the ``Dockerfile`` file.
 #. Optional: download any other plugin ZIP file as appropriate. For example, to access the ``composites`` plugin for Linux,
    download ``ansys_dpf_composites_lin_v2025.1.pre0.zip``.
-#. Copy all the ZIP files and ``Dockerfile`` file in a folder and navigate into that folder.
+#. Copy all the ZIP files and the ``Dockerfile`` file into a folder.
+#. Navigate into the folder used in the previous step.
 #. To build the DPF Docker container, run the following command:

 .. code::
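The detection logic above can also be driven from a script. A minimal sketch, assuming a local DPF Server installation is present; the path assigned to ``ANSYS_DPF_PATH`` is hypothetical:

.. code-block:: python

    import os
    from ansys.dpf import core as dpf

    # Highest-priority override: point PyDPF at a specific DPF Server installation.
    os.environ["ANSYS_DPF_PATH"] = r"C:\Program Files\ANSYS Inc\DPF Server\v251"  # hypothetical path

    # With no ansys_path argument, PyDPF applies the detection steps listed above.
    server = dpf.start_local_server()
    print(server.version)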
diff --git a/doc/source/getting_started/index.rst b/doc/source/getting_started/index.rst
index 35997953b3..a13f5b93a5 100644
--- a/doc/source/getting_started/index.rst
+++ b/doc/source/getting_started/index.rst
@@ -6,7 +6,7 @@ Getting started

 The Data Processing Framework (DPF) provides numerical simulation users and engineers with a toolbox
 for accessing and transforming simulation data. DPF can access data from Ansys solver
-result files as well as from several neutral (see :ref:`ref_main_index`).
+result files as well as from several neutral file formats. For more information, see :ref:`ref_main_index`.

 This **workflow-based** framework allows you to perform complex preprocessing and
 postprocessing operations on large amounts of simulation data.
diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
index 3e14ed4543..50f065b823 100644
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -16,8 +16,8 @@ with this command:

    pip install ansys-dpf-core

-PyDPF-Core plotting capabilities require to have `PyVista `_ installed.
-To install PyDPF-Core with its optional plotting functionalities, use:
+PyDPF-Core plotting capabilities require you to have `PyVista `_ installed.
+To install PyDPF-Core with its optional plotting functionalities, run this command:

 .. code::
@@ -62,10 +62,10 @@ then use the following command from within this local directory:

    pip install --no-index --find-links=. ansys-dpf-core

-Beware that PyDPF-Core wheelhouses do not include the optional plotting dependencies.
-To allow for plotting capabilities, also download the wheels corresponding to your platform and Python interpreter version
+Note that PyDPF-Core wheelhouses do not include the optional plotting dependencies.
+To use the plotting capabilities, also download the wheels corresponding to your platform and Python interpreter version
 for `PyVista `_ and
-`matplotlib `_, then place them in the same previous local directory and run the command above.
+`matplotlib `_. Then, place them in the same local directory and run the preceding command.

 Install in development mode
diff --git a/doc/source/getting_started/licensing.rst b/doc/source/getting_started/licensing.rst
index 7138da7025..3d07bc4601 100644
--- a/doc/source/getting_started/licensing.rst
+++ b/doc/source/getting_started/licensing.rst
@@ -4,11 +4,10 @@ Licensing
 =========

-This section details how to properly set up licensing, as well as what the user should expect in
-terms of limitations or license usage when running PyDPF scripts.
+This section describes how to properly set up licensing, as well as limitations and license usage when running PyDPF scripts.

-DPF follows a client-server architecture, which means that the PyDPF client library must interact with a running DPF Server.
-It either starts a DPF Server via a local installation of DPF Server, or it connects to an already running local or remote DPF Server.
+DPF follows a client-server architecture, so the PyDPF client library must interact with a running DPF Server.
+It either starts a DPF Server via a local DPF Server installation, or it connects to an already running local or remote DPF Server.

 DPF Server is packaged within the **Ansys installer** in Ansys 2021 R1 and later.
 It is also available as a standalone application.
@@ -20,12 +19,12 @@ For more information on installing DPF Server, see :ref:`ref_dpf_server`.
 License terms
 -------------

-When using the DPF Server from an Ansys installation, the user has already agreed to the licensing
+When using the DPF Server from an Ansys installation, you have already agreed to the licensing
 terms when installing Ansys.
-When using a standalone DPF Server, the user must accept the ``DPF Preview License Agreement``
+When using a standalone DPF Server, you must accept the ``DPF Preview License Agreement``
 by following the indications below.
-Starting a DPF Server without agreeing to the ``DPF Preview License Agreement`` throws an exception.
+Starting a DPF Server without agreeing to the ``DPF Preview License Agreement`` raises an exception.

 DPF Preview License Agreement
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -51,7 +50,7 @@ existing license for the edition and version of DPF Server that you intend to use.

 Configure licensing
 -------------------

-If your machine does not have a local Ansys installation, you need to define where DPF should look for a valid license.
+If your machine does not have a local Ansys installation, you must define where DPF should look for a valid license.

 To use a local license file, set the ``ANSYSLMD_LICENSE_FILE`` environment
 variable to point to an Ansys license file ````:
@@ -85,12 +84,12 @@ License checks and usage
 ------------------------

 Some DPF operators require DPF to check for an existing license
-and some require DPF to check-out a compatible license increment.
+and some require DPF to check out a compatible license increment.

-DPF is by default allowed to check-out license increments as needed.
+By default, DPF is allowed to check out license increments as needed.
 To change this behavior, see :ref:`here `.

-To know if operators require a license increment check-out to run, check their ``license``
+To know if operators require a license increment checkout to run, check their ``license``
 attribute in :ref:`ref_dpf_operators_reference` or directly in Python by checking the operator's
 properties for a ``license`` key:
@@ -109,13 +108,13 @@

 To check which Ansys licensing increments correspond to ``any_dpf_supported_increments``,
-see :ref:`here`.
+see :ref:`Compatible Ansys license increments`.

-Even if an operator does not require a license check-out to run, most DPF operators still require
+Even if an operator does not require a license checkout to run, most DPF operators still require
 DPF to check for a reachable license server or license file.
-Operators which do not perform any kind of license check are source operators (data extraction
-operators) which do not perform any data transformation.
+Operators that do not perform any kind of license check are source operators (data extraction
+operators). These operators do not perform any data transformation.

 For example, when considering result operators, they perform data transformation if the requested
 location is not the native result location. In that case, averaging occurs which is considered
diff --git a/doc/source/operator_reference.rst b/doc/source/operator_reference.rst
index d2d8b68455..fd0eaacd41 100644
--- a/doc/source/operator_reference.rst
+++ b/doc/source/operator_reference.rst
@@ -4,16 +4,7 @@ Operators
 =========

-DPF operators provide for manipulating and transforming simulation data.
-
-From DPF Server for Ansys 2023 R2 and later, the licensing logic for operators in DPF depend on the active
-`ServerContext `_.
-
-The available contexts are **Premium** and **Entry**.
-Licensed operators are marked as in the documentation using the ``license`` property.
-Operators with the ``license`` property as **None** do not require a license check-out.
-For more information about using these two contexts, see :ref:`user_guide_server_context`.
-Click below to access the operators documentation.
+DPF operators allow you to manipulate and transform simulation data.

 .. grid:: 1

@@ -32,9 +23,17 @@ Click below to access the operators documentation.

    :click-parent:

+For Ansys 2023 R2 and later, the DPF Server licensing logic for operators in DPF depends on the active
+`server context`_.
+
+The available contexts are **Premium** and **Entry**.
+Licensed operators are marked as such in the documentation using the ``license`` property.
+Operators with the ``license`` property set to **None** do not require a license checkout.
+For more information on using these two contexts, see :ref:`user_guide_server_context`.
+
 .. note::

-    For Ansys 2023 R1 and earlier, the context is equivalent to Premium, with all operators loaded.
+    For Ansys 2023 R1 and earlier, the context is equivalent to **Premium**, with all operators loaded.
     For DPF Server 2023.2.pre0 specifically, the server context defines which operators are loaded and accessible.
     Use the `PyDPF-Core 0.7 operator documentation `_ to learn more.

 Some operators in the documentation might not be available for a particular server version.
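The Python snippet for reading the ``license`` key is elided from the licensing hunk above. A minimal sketch of that check, assuming a running or auto-started server; ``norm`` is an arbitrary example operator:

.. code-block:: python

    from ansys.dpf import core as dpf

    op = dpf.Operator("norm")
    # Licensed operators expose a "license" entry in their specification properties.
    print(op.specification.properties.get("license"))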
diff --git a/doc/source/user_guide/concepts/concepts.rst b/doc/source/user_guide/concepts/concepts.rst
index c2b9bb3ac2..bb19474926 100644
--- a/doc/source/user_guide/concepts/concepts.rst
+++ b/doc/source/user_guide/concepts/concepts.rst
@@ -3,7 +3,7 @@
 ==================
 Terms and concepts
 ==================
-DPF sees **fields of data**, not physical results. This makes DPF a
+DPF operates on **fields of data**, not physical results. This makes DPF a
 very versatile tool that can be used across teams, projects, and
 simulations.

@@ -11,25 +11,25 @@ Key terms
 ---------
 Here are descriptions for key DPF terms:

-- **Data source:** One or more files containing analysis results.
-- **Field:** Main simulation data container.
-- **Field container:** For a transient, harmonic, modal, or multi-step
+- **Data source**: One or more files containing analysis results.
+- **Field**: Main simulation data container.
+- **Fields container**: For a transient, harmonic, modal, or multi-step
   static analysis, a set of fields, with one field for each time step
   or frequency.
-- **Location:** Type of topology associated with the data container. DPF
+- **Location**: Type of topology associated with the data container. DPF
   uses three different spatial locations for finite element data: ``Nodal``,
   ``Elemental``, and ``ElementalNodal``.
-- **Operators:** Objects that are used to create, transform, and stream the data.
+- **Operators**: Objects that are used to create, transform, and stream the data.
   An operator is composed of a **core** and **pins**. The core handles the
   calculation, and the pins provide input data to and output data from
   the operator.
-- **Scoping:** Spatial and/or temporal subset of a model's support.
-- **Support:** Physical entity that the field is associated with. For example,
+- **Scoping**: Spatial and/or temporal subset of a model's support.
+- **Support**: Physical entity that the field is associated with. For example,
   the support can be a mesh, geometrical entity, or time or frequency values.
-- **Workflow:** Global entity that is used to evaluate the data produced
+- **Workflow**: Global entity that is used to evaluate the data produced
   by chained operators.
-- **Meshed region:** Entity describing a mesh. Node and element scopings,
-  element types, connectivity (list of node indices composing each element) and
+- **Meshed region**: Entity describing a mesh. Node and element scopings,
+  element types, connectivity (list of node indices composing each element), and
   node coordinates are the fundamental entities composing the meshed region.

 Scoping
diff --git a/doc/source/user_guide/concepts/stepbystep.rst b/doc/source/user_guide/concepts/stepbystep.rst
index 55b9caad74..d4cb74e94f 100644
--- a/doc/source/user_guide/concepts/stepbystep.rst
+++ b/doc/source/user_guide/concepts/stepbystep.rst
@@ -23,7 +23,7 @@ Data can come from two sources:

 - **Manual input in DPF:** You can create fields of data in DPF.

 Once you specify data sources or manually create fields in DPF,
-you can create field containers (if applicable) and define scopings to
+you can create fields containers (if applicable) and define scopings to
 identify the subset of data that you want to evaluate.

 Specify the data source
@@ -103,27 +103,27 @@ This code shows how to define a mesh scoping:

    my_scoping.location = "Nodal" #optional
    my_scoping.ids = list(range(1,11))

-Define field containers
-~~~~~~~~~~~~~~~~~~~~~~~
-A **field container** holds a set of fields. It is used mainly for
+Define fields containers
+~~~~~~~~~~~~~~~~~~~~~~~~
+A **fields container** holds a set of fields. It is used mainly for
 transient, harmonic, modal, or multi-step analyses. This image
 explains its structure:

 .. image:: ../../images/drawings/field-con-overview.png

-A field container is a vector of fields. Fields are ordered with labels
-and IDs. Most commonly, a field container is scoped on the time label,
+A fields container is a vector of fields. Fields are ordered with labels
+and IDs. Most commonly, a fields container is scoped on the time label,
 and the IDs are the time or frequency sets:

 .. image:: ../../images/drawings/field-con.png

-You can define a field container in multiple ways:
+You can define a fields container in multiple ways:

 - Extract labeled data from a result file.
-- Create a field container from a CSV file.
-- Convert existing fields to a field container.
+- Create a fields container from a CSV file.
+- Convert existing fields to a fields container.

-This code shows how to define a field container from scratch:
+This code shows how to define a fields container from scratch:

 .. code-block:: python
@@ -137,9 +137,9 @@ This code shows how to define a field container from scratch:

       mscop = {"time":i+1,"complex":1}
       fc.add_field(mscop,dpf.Field(nentities=i+10))

-Some operators can operate directly on field containers instead of fields.
-Field containers are identified by ``fc`` suffixes in their names.
-Operators and field containers are explained in more detail
+Some operators can operate directly on fields containers instead of fields.
+Fields containers are identified by ``fc`` suffixes in their names.
+Operators and fields containers are explained in more detail
 in :ref:`transform_the_data`.

 .. _transform_the_data:
@@ -155,18 +155,16 @@ Use operators

 You use operators to import, export, transform, and analyze data.

 An operator is analogous to an integrated circuit in electronics. It
-has a set of input and output pins. Pins provide for passing data to
-and from operators.
+has a set of input and output pins. Pins pass data to and from operators.

-An operator takes input from a field, field container, or scoping using
+An operator takes input from a field, fields container, or scoping using
 an input pin. Based on what it is designed to do, the operator computes
-an output that it passes to a field or field container using an output pin.
+an output that it passes to a field or fields container using an output pin.

 .. image:: ../../images/drawings/circuit.png

 Comprehensive information on operators is available in :ref:`ref_dpf_operators_reference`.
-In the **Available Operators** area for either the **Entry** or **Premium** operators,
-you can either type a keyword in the **Search** option
+In the **Available Operators** area, you can either type a keyword in the **Search** option
 or browse by operator categories:

 .. image:: ../../images/drawings/help-operators.png
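To make the pin wiring concrete, here is a minimal sketch that connects a data source to a result operator and reads the output pin; it assumes the bundled ``simple_bar`` example file, and the operator choice is arbitrary:

.. code-block:: python

    from ansys.dpf import core as dpf
    from ansys.dpf.core import examples

    model = dpf.Model(examples.find_simple_bar())

    # Input pin: connect the data sources; output pin: read a fields container.
    disp_op = dpf.operators.result.displacement()
    disp_op.inputs.data_sources.connect(model.metadata.data_sources)
    fields = disp_op.outputs.fields_container()
    print(fields)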
diff --git a/doc/source/user_guide/concepts/waysofusing.rst b/doc/source/user_guide/concepts/waysofusing.rst
index 4887669f38..01db500e5e 100644
--- a/doc/source/user_guide/concepts/waysofusing.rst
+++ b/doc/source/user_guide/concepts/waysofusing.rst
@@ -3,52 +3,50 @@
 ========================================
 DPF capabilities and scripting languages
 ========================================
+DPF is a framework that provides data computation capabilities.

-DPF as a Framework enabling data computation capabilities
----------------------------------------------------------
+DPF as a framework
+------------------

-DPF application: kernel and operator's libraries
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+DPF application: kernel and operator libraries
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-DPF is a framework that provides data computation capabilities. These capabilities are provided
-through libraries of operators. To learn more about the computed data and the operator concepts, see :ref:`user_guide_concepts`.
+DPF capabilities are provided through libraries of operators.
+To learn more about the computed data and the operator concepts, see :ref:`user_guide_concepts`.

 A DPF application is always composed of a kernel (DataProcessingCore and DPFClientAPI binaries),
 that enables capabilities by loading libraries of operators (for example, mapdlOperatorsCore library
 is basic library enabled by DPF). This application is also called a **DPF Server application**.

-When starting a DPF application, you can customize the list of operator's libraries that the kernel loads.
+When starting a DPF application, you can customize the list of the libraries that the kernel loads.
 To learn more on how to customize the initialization of a DPF application, see :ref:`user_guide_xmlfiles`.

 DPF client: available APIs and languages
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-DPF is a framework that provides data computation capabilities. These capabilities are
-enabled using the DPF Server application.
+DPF capabilities are enabled using the DPF Server application.
 These capabilities can be accessed through client APIs, as shown here:

 .. image:: ../../images/drawings/apis_2.png

-1. DPF server application can be accessed using Ansys Inc product, or DPF Server package (see :ref:`ref_dpf_server`) available on the Customer portal.
+- The DPF server application can be accessed using an Ansys product or a DPF Server package (see :ref:`ref_dpf_server`) available on the Customer portal.
+- Several client APIs are available (CPython, IronPython, and C++).
+- Communication in the same process, or through gRPC, allows you to have the client and the server on different machines.

-2. Several client APIs are available (CPython, IronPython, C++, and so on).
+Note that **IronPython and CPython APIs are different**; each has a specific syntax.

-3. Communication in the same process, or through gRPC, allows you to have the client and the servers on different machines.
-
-Note that **IronPython and CPython APIs are different**, each has specific syntax.
-
-The **list of available operators when using DPF is independent from the language or API which is used**, it only depends
+The **list of available operators when using DPF is independent of the language or API used**. It depends
 only on how the DPF application has been initialized.

-Most of the DPF capabilities can be accessed using the operators. For more information about the existing operators, see the **Operators** tab.
+Most of the DPF capabilities can be accessed using the operators. For more information on the existing operators, see :ref:`ref_dpf_operators_reference`.

 Enhance DPF capabilities
 ~~~~~~~~~~~~~~~~~~~~~~~~

-The available DPF capabilities loaded in a DPF application can be enhanced by creating new operator's libraries.
DPF offers multiple development APIs depending on your environment. These plugins can be: - CPython based (see :ref:`user_guide_custom_operators`) diff --git a/doc/source/user_guide/custom_operators.rst b/doc/source/user_guide/custom_operators.rst index 1e211bd5b0..9aa1aa21ee 100644 --- a/doc/source/user_guide/custom_operators.rst +++ b/doc/source/user_guide/custom_operators.rst @@ -13,7 +13,7 @@ With support for custom operators, PyDPF-Core becomes a development tool offerin - **Accessibility:** A simple script can define a basic operator plugin. -- **Componentization:** Operators with similar applications can be grouped in Python plug-in packages. +- **Componentization:** Operators with similar applications can be grouped in Python plugin packages. - **Easy distribution:** Standard Python tools can be used to package, upload, and download custom operators. @@ -29,10 +29,10 @@ For more information, see :ref:`ref_user_guide_operators`. Install module -------------- -Once an Ansys-unified installation is complete, you must install the ``ansys-dpf-core`` module in the Ansys +Once an Ansys unified installation is complete, you must install the ``ansys-dpf-core`` module in the Ansys installer's Python interpreter. -#. Download the script for you operating system: +#. Download the script for your operating system: - For Windows, download this :download:`PowerShell script `. - For Linux, download this :download:`Shell script ` @@ -44,7 +44,7 @@ installer's Python interpreter. - ``-pip_args``: Optional arguments to add to the ``pip`` command. For example, ``--extra-index-url`` or ``--trusted-host``. -If you ever want to uninstall the ``ansys-dpf-core`` module from the Ansys installation, you can do so. +To uninstall the ``ansys-dpf-core`` module from the Ansys installation: #. Download the script for your operating system: @@ -59,11 +59,11 @@ If you ever want to uninstall the ``ansys-dpf-core`` module from the Ansys insta Create operators ---------------- -You can create a basic operator plugin or a plug-in package with multiple operators. +You can create a basic operator plugin or a plugin package with multiple operators. Basic operator plugin ~~~~~~~~~~~~~~~~~~~~~ -To create a basic operator plugin, you write a simple Python script. An operator implementation +To create a basic operator plugin, write a simple Python script. An operator implementation derives from the :class:`ansys.dpf.core.custom_operator.CustomOperatorBase` class and a call to the :func:`ansys.dpf.core.custom_operator.record_operator` method. @@ -78,32 +78,32 @@ This example script shows how you create a basic operator plugin: record_operator(CustomOperator, *args) -In the various properties for the class, you specify the following: +In the various properties for the class, specify the following: - Name for the custom operator - Description of what the operator does -- Dictionary for each input and output pin, which includes the name, a list of supported types, a description, +- Dictionary for each input and output pin. This dictionary includes the name, a list of supported types, a description, and whether it is optional and/or ellipsis (meaning that the specification is valid for pins going from pin number *x* to infinity) - List for operator properties, including name to use in the documentation and code generation and the - operator category. The optional ``license`` property allows to define a required license to check out + operator category. 
The optional ``license`` property lets you define a required license to check out
 when running the operator. Set it equal to ``any_dpf_supported_increments`` to allow any license
 currently accepted by DPF (see :ref:`here`)

 For comprehensive examples on writing operator plugins, see :ref:`python_operators`.

-Plug-in package with multiple operators
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-To create a plug-in package with multiple operators or with complex routines, you write a
+Plugin package with multiple operators
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+To create a plugin package with multiple operators or with complex routines, write a
 Python package. The benefits of writing packages rather than simple scripts are:

 - **Componentization:** You can split the code into several Python modules or files.
 - **Distribution:** You can use standard Python tools to upload and download packages.
 - **Documentation:** You can add README files, documentation, tests, and examples to the package.

-A plug-in package with dependencies consists of a folder with the necessary files. Assume
-that the name of your plug-in package is ``custom_plugin``. A folder with this name would
+A plugin package with dependencies consists of a folder with the necessary files. Assume
+that the name of your plugin package is ``custom_plugin``. A folder with this name would
 contain four files:

 - ``__init__.py``
@@ -149,7 +149,7 @@ Third-party dependencies

 .. include:: custom_operators_deps.rst

-Assume once again that the name of your plug-in package is ``custom_plugin``.
+Assume once again that the name of your plugin package is ``custom_plugin``.
 A folder with this name would contain these files:

 - ``__init__.py``
@@ -232,7 +232,7 @@ For a plugin that is a single script, the second argument should be ``py_`` plus

        "py_custom_plugin", #if the load_operators function is defined in path/to/plugins/custom_plugin.py
        "load_operators")

-For a plug-in package, the second argument should be ``py_`` plus any name:
+For a plugin package, the second argument should be ``py_`` plus any name:

 .. code::
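Most of the example script and the loading call are elided from the hunks above. A minimal sketch of both, assuming the documented ``CustomOperatorBase`` API; the identity behavior, the operator name, and any paths are hypothetical:

.. code-block:: python

    from ansys.dpf import core as dpf
    from ansys.dpf.core.custom_operator import CustomOperatorBase, record_operator
    from ansys.dpf.core.operator_specification import CustomSpecification, PinSpecification


    class CustomOperator(CustomOperatorBase):
        @property
        def name(self):
            return "my_custom_operator"

        @property
        def specification(self):
            spec = CustomSpecification()
            spec.description = "Forwards the input field unchanged."
            spec.inputs = {0: PinSpecification("field", [dpf.Field], "Field to forward.")}
            spec.outputs = {0: PinSpecification("field", [dpf.Field], "Forwarded field.")}
            return spec

        def run(self):
            field = self.get_input(0, dpf.Field)
            self.set_output(0, field)
            self.set_succeeded()


    def load_operators(*args):
        record_operator(CustomOperator, *args)

Loading the package then follows the pattern shown above, for example ``dpf.load_library(r"path/to/plugins/custom_plugin", "py_custom_plugin", "load_operators")``, where the path is hypothetical.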
diff --git a/doc/source/user_guide/custom_operators_deps.rst b/doc/source/user_guide/custom_operators_deps.rst
index 754b587947..8872d77fd6 100644
--- a/doc/source/user_guide/custom_operators_deps.rst
+++ b/doc/source/user_guide/custom_operators_deps.rst
@@ -1,20 +1,20 @@
-To add third-party modules as dependencies to a plug-in package, you should create
+To add third-party modules as dependencies to a plugin package, create
 and reference a folder or ZIP file with the sites of the dependencies in an XML file
-located next to the folder for the plug-in package. The XML file must have the same
-name as the plug-in package plus an ``.xml`` extension.
+located next to the folder for the plugin package. The XML file must have the same
+name as the plugin package plus an ``.xml`` extension.

 When the :py:func:`ansys.dpf.core.core.load_library` method is called, PyDPF-Core uses the
 ``site`` Python module to add custom sites to the path for the Python interpreter.

-To create these custom sites, do the following:
+To create these custom sites:

-#. Install the requirements of the plug-in package in a Python virtual environment.
-#. Remove unnecessary folders from the site packages and compress them to a ZIP file.
-#. Place the ZIP file in the plug-in package.
+#. Install the requirements of the plugin package in a Python virtual environment.
+#. Remove unnecessary folders from the site packages and compress them into a ZIP file.
+#. Place the ZIP file in the plugin package.
 #. Reference the path to the ZIP file in the XML file as indicated above.

-To simplify this step, you can add a requirements file in the plug-in package:
+To simplify this step, you can add a requirements file in the plugin package:

 .. literalinclude:: /examples/07-python-operators/plugins/gltf_plugin/requirements.txt

@@ -28,7 +28,7 @@ For this approach, do the following:

 3. Run the downloaded script with the mandatory arguments:

-   - ``-pluginpath``: Path to the folder with the plug-in package.
+   - ``-pluginpath``: Path to the folder with the plugin package.
    - ``-zippath``: Path and name for the ZIP file.

 Optional arguments are:
diff --git a/doc/source/user_guide/fields_container.rst b/doc/source/user_guide/fields_container.rst
index b5f8fba8b4..ec0440f321 100644
--- a/doc/source/user_guide/fields_container.rst
+++ b/doc/source/user_guide/fields_container.rst
@@ -1,12 +1,11 @@
 .. _ref_user_guide_fields_container:

-===========================
-Fields container and fields
-===========================
-While DPF uses operators to load and operate on data, it uses field containers
-and fields to store and return data. In other words, operators are like verbs,
-acting on the data, while field containers and fields are like nouns, objects
-that hold data.
+============================
+Fields containers and fields
+============================
+While DPF uses operators to load and operate on data, it uses fields containers
+and fields to store and return data. Operators are like verbs, acting on the data,
+while fields containers and fields are like nouns, objects that hold data.

 Access a fields container or field
 -----------------------------------
@@ -61,7 +60,7 @@ This example uses the ``elastic_strain`` operator to access a fields container:

 ...

 Access fields within a fields container
 ---------------------------------------
-Many methods are available for accessing a field in a field
+Many methods are available for accessing a field in a fields
 container. The preceding results contain a transient result, which
 means that the fields container has one field by time set.
@@ -127,7 +126,7 @@ To access fields for more complex requests, you can use the

 ...

-Here is a more real-word example:
+Here is a more real-world example:

 .. code-block::
@@ -409,7 +408,7 @@ Note that this array is a genuine, local, numpy array (overloaded by the DPFArray

-If you need to access an individual node or element, request it
+To access an individual node or element, request it
 using either the ``get_entity_data()`` or ``get_entity_data_by_id()`` method:

 Get the data from the first element in the field.
@@ -480,7 +479,7 @@ get the index of element 29 in the field with:

    29

-Here the data for the element with ID 10 is made of 8 symmetrical tensors.
+Here the data for the element with ID 10 is made of eight symmetrical tensors.
 The elastic strain has one tensor value by node by element (ElementalNodal location)

 To get the displacement on node 3, you would use:
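The access commands themselves are elided from the hunk above. A sketch of both lookups, assuming ``field`` holds the elastic strain field and ``disp`` a displacement field from the earlier steps:

.. code-block:: python

    # Eight ElementalNodal symmetric tensors for the element with ID 10.
    print(field.get_entity_data_by_id(10))

    # The displacement vector on the node with ID 3.
    print(disp.get_entity_data_by_id(3))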
diff --git a/doc/source/user_guide/operators.rst b/doc/source/user_guide/operators.rst
index 6671524858..df2c8a647a 100644
--- a/doc/source/user_guide/operators.rst
+++ b/doc/source/user_guide/operators.rst
@@ -147,7 +147,7 @@ Because several other examples use the ``Model`` class, this example uses the
 result key: rst and path: path\...\ansys\dpf\core\examples\model_with_ns.rst
 Secondary files:

-This code shows how to connect the data source to the displacement operator:
+This code demonstrates how to connect the data source to the displacement operator:

 .. code-block:: python
@@ -306,7 +306,7 @@ DPF provides three main types of operators:

 Operators for importing or reading data
 ***************************************

-These operators provide for reading data from solver files or from standard file types
+These operators read data from solver files or from standard file types
 such as .RST (MAPDL), .D3Plot (LS DYNA), .CAS.H5/.DAT.H5 (Fluent) or .CAS.CFF/.DAT.CFF (CFX).

 To read these files, different readers are implemented as plugins.
diff --git a/doc/source/user_guide/plotting.rst b/doc/source/user_guide/plotting.rst
index cf489aede3..3e355552ee 100644
--- a/doc/source/user_guide/plotting.rst
+++ b/doc/source/user_guide/plotting.rst
@@ -1,8 +1,9 @@
 .. _user_guide_plotting:

-====
-Plot
-====
+========
+Plotting
+========
+
 DPF-Core has a variety of plotting methods for generating 3D plots of
 Ansys models directly from Python. These methods use VTK and leverage
 the `PyVista `_ library to
@@ -74,7 +75,7 @@ First, extract the X component strain

 Data:1 components and 69120 elementary data

-This ElementalNodal strain must be converted to nodal strain for it to be plotted.
+This ElementalNodal strain must be converted to a nodal strain for it to be plotted.

 .. code-block::
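The conversion snippet is elided above. A sketch using an averaging operator (note the ``fc`` suffix convention); ``strain_fields`` and ``model`` are assumed from the preceding extraction steps:

.. code-block:: python

    from ansys.dpf import core as dpf

    # Average ElementalNodal strain to the nodes so it can be plotted.
    to_nodal = dpf.operators.averaging.elemental_nodal_to_nodal_fc()
    to_nodal.inputs.fields_container.connect(strain_fields)
    nodal_strain = to_nodal.outputs.fields_container()

    model.metadata.meshed_region.plot(nodal_strain)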
diff --git a/doc/source/user_guide/server_context.rst b/doc/source/user_guide/server_context.rst
index 04d21830bc..d92ae21587 100644
--- a/doc/source/user_guide/server_context.rst
+++ b/doc/source/user_guide/server_context.rst
@@ -23,9 +23,7 @@ Two main licensing context type capabilities are available:

 - **Entry:** This context does not allow DPF to perform any license checkout,
   meaning that licensed DPF operators fail.

-For the operator list for each licensing context type, see :ref:`ref_dpf_operators_reference`.
-The **Premium** operators reference includes licensed DPF operators.
-The **Entry** operators reference only includes unlicensed DPF operators.
+For the operator list, see :ref:`ref_dpf_operators_reference`.

 Change server context from Entry to Premium
 -------------------------------------------
@@ -66,7 +64,7 @@ Change the default server context

 The default context for the server is **Premium**. You can change the context using
 the ``ANSYS_DPF_SERVER_CONTEXT`` environment variable. For more information, see
-the :module: `` module). You can also change the server context
+the :mod:`ansys.dpf.core.server_context` module. You can also change the server context
 with this code:

 .. code-block::
@@ -83,7 +81,7 @@ with this code:

 .. warning::
     As starting an ``InProcess`` server means linking the DPF binaries to your current Python
-    process, you cannot start a new ``InProcess`` server. Thus, if your local ``InProcess`` server
+    process, you cannot start a new ``InProcess`` server. If your local ``InProcess`` server
     is already **Premium**, you cannot set it back as **Entry**.
     ``InProcess`` being the default server type, the proper commands to work as **Entry** should be set
     at the start of your script.
diff --git a/doc/source/user_guide/troubleshooting.rst b/doc/source/user_guide/troubleshooting.rst
index de523f7b90..ef16f929fc 100644
--- a/doc/source/user_guide/troubleshooting.rst
+++ b/doc/source/user_guide/troubleshooting.rst
@@ -23,7 +23,7 @@ where ``VER`` is the three-digit numeric format for the version, such as ``221``

 Connect to the DPF server
 ~~~~~~~~~~~~~~~~~~~~~~~~~
-If an issue appears while using Py-DPF code to connect to an initialized server with the
+If an issue appears while using PyDPF code to connect to an initialized server with the
 :py:meth:`connect_to_server() ` method, ensure that the IP address and port number
 that are set as parameters are applicable for a DPF server started on the network.
@@ -37,7 +37,7 @@ Assume that you are importing the ``PyDPF-Core`` package:

    from ansys.dpf import core as dpf

 If an error lists missing modules, see :ref:`ref_compatibility`.
-For ``PyDPF-Core``<0.10.0, the `ansys.grpc.dpf `_ module
+For ``PyDPF-Core`` versions earlier than 0.10.0, the `ansys.grpc.dpf `_ module
 should always be synchronized with its server version.

 .. _user_guide_troubleshooting_model_issues:
@@ -81,7 +81,7 @@ When trying to plot a result with DPF, the following error might be raised:

    ModuleNotFoundError: No module named 'pyvista'

-In that case, simply install `PyVista `_` with this command:
+In that case, install `PyVista `_ with this command:

 .. code-block:: default
diff --git a/doc/source/user_guide/xmlfiles.rst b/doc/source/user_guide/xmlfiles.rst
index e98de65927..735445ca0f 100644
--- a/doc/source/user_guide/xmlfiles.rst
+++ b/doc/source/user_guide/xmlfiles.rst
@@ -111,7 +111,7 @@ element, for example, would only be used with a debug version of the
 The element names for plugins, such as ```` and ````, are used as **keys** when
 loading plugins. Each plugin must have a unique key.
-The element for each plug-in has child elements:
+The element for each plugin has child elements:

 - ````: Contains the location of the plugin to load. The normal mechanism
   that the operating system uses to find a DLL or SO file is used. The DLL