ENG-1332: Started implementing new aixplain programmatic api #356

Merged 40 commits on Feb 4, 2025

Commits
310f591  ENG-1332: Started implementing new aixplain programmatic api  (kadirpekel, Jan 6, 2025)
250d608  ENG-1332: removed legacy counterparts  (kadirpekel, Jan 7, 2025)
47d877c  ENG-1332: many improvements  (kadirpekel, Jan 7, 2025)
d6766f5  ENG-1332: reorganization on the module structure  (kadirpekel, Jan 7, 2025)
82e59ab  ENG-1332: packaging tweak  (kadirpekel, Jan 7, 2025)
c82f891  ENG-1332: enums autogenerated with foundation  (kadirpekel, Jan 8, 2025)
63ff3f4  ENG-1332: fixed and completed enums foundation  (kadirpekel, Jan 8, 2025)
e7cbcd9  ENG-1332: bug fix  (kadirpekel, Jan 8, 2025)
dbc36d8  ENG-1332: generate.py called with prod account  (kadirpekel, Jan 8, 2025)
7eba55b  ENG-1332: enums.py cleaned up  (kadirpekel, Jan 8, 2025)
af2aa3a  ENG-1332: relocated v2 module into main aixplain module  (kadirpekel, Jan 8, 2025)
e280fdc  ENG-1332: fixed dynamic enum assignment  (kadirpekel, Jan 8, 2025)
37b2e00  ENG-1332: static enum and resource definition on main class  (kadirpekel, Jan 8, 2025)
0473a1f  ENG-1332: first set of unit tests for v2  (kadirpekel, Jan 10, 2025)
a2261c7  ENG-1332: base resource unit tests for v2  (kadirpekel, Jan 10, 2025)
dd429a4  ENG-1332: completed critical unit tests for v2  (kadirpekel, Jan 10, 2025)
d6d7d5e  ENG-1332: fixed type hints to make autocompletion work well  (kadirpekel, Jan 14, 2025)
1da9d6f  ENG-1332: Many improvements on class structure  (kadirpekel, Jan 14, 2025)
8c67d8a  ENG-1332: Added missing resource implementations  (kadirpekel, Jan 14, 2025)
44c52f0  ENG-1331: Fixed unit tests  (kadirpekel, Jan 20, 2025)
7c1b57f  added api key  (xainaz, Jan 20, 2025)
bdfa2c2  removed api-key  (xainaz, Jan 21, 2025)
b285d04  ENG-1332: added missing factory methods. revisited typeddict optional…  (kadirpekel, Jan 21, 2025)
38af85f  ENG-1332: Main class constructor revisited  (kadirpekel, Jan 21, 2025)
fdd9603  ENG-1332: fixed tests  (kadirpekel, Jan 22, 2025)
910aba4  ENG-1440: refactoring factories to aiXplain SDK v2 (#371)  (thiago-aixplain, Jan 22, 2025)
a4cc2e2  ENG-1332: progress on paging  (kadirpekel, Jan 22, 2025)
413f5e6  ENG-1332: removed Resource/Page wrappers  (kadirpekel, Jan 22, 2025)
fbfe1c3  ENG-1332: Got the v2 testable through functional tests  (kadirpekel, Jan 23, 2025)
111cb82  ENG-1332: factory var name restored to original  (kadirpekel, Jan 23, 2025)
0305520  ENG-1332: quick revisit all functional tests to employ v2 counterparts  (kadirpekel, Jan 23, 2025)
fe5a624  ENG-1332: Fixed many frictions in functional test adoption  (kadirpekel, Jan 24, 2025)
5d22724  Last fixes in repo  (thiago-aixplain, Jan 26, 2025)
fb3b832  Adding Team agent class on SDK v2  (thiago-aixplain, Jan 26, 2025)
4e66d75  Breaking down general assets  (thiago-aixplain, Jan 26, 2025)
c9959a9  ENG-1332: parameterized general asset tests  (kadirpekel, Jan 28, 2025)
bd0b8cc  ENG-1332: Fixed team agent tests  (kadirpekel, Jan 28, 2025)
656b905  Fixing APIKey getter in v2  (thiago-aixplain, Jan 29, 2025)
2f44dfb  ENG-1332: Several bug fixes  (kadirpekel, Jan 30, 2025)
090b562  Merge Development  (thiago-aixplain, Feb 4, 2025)
23 changes: 11 additions & 12 deletions aixplain/__init__.py
@@ -20,24 +20,23 @@
limitations under the License.
"""

# Set default logging handler to avoid "No handler found" warnings.
import os
import logging
from logging import NullHandler
from dotenv import load_dotenv

load_dotenv()

from .v2.core import Aixplain # noqa

LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO").upper()
logging.basicConfig(level=LOG_LEVEL)
logging.getLogger(__name__).addHandler(NullHandler())

# Load Environment Variables
from dotenv import load_dotenv

load_dotenv()
aixplain_v2 = None
try:
    aixplain_v2 = Aixplain()
except Exception:
    pass

# Validate API keys
from aixplain.utils import config

if config.TEAM_API_KEY == "" and config.AIXPLAIN_API_KEY == "":
    raise Exception(
        "'TEAM_API_KEY' has not been set properly and is empty. For help, please refer to the documentation (https://github.com/aixplain/aixplain#api-key-setup)"
    )
__all__ = ["Aixplain", "aixplain_v2"]
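
For orientation, here is a minimal usage sketch of the new programmatic entry point this file now exports. It relies only on what the diff shows (the Aixplain class, the module-level aixplain_v2 instance, and __all__); the no-argument constructor and the environment-based key are assumptions carried over from the code above.

import os

# Hypothetical key for illustration only; in practice it is loaded from .env by load_dotenv().
os.environ.setdefault("TEAM_API_KEY", "your-team-api-key")

from aixplain import Aixplain, aixplain_v2

# Explicit construction, mirroring the module-level `aixplain_v2 = Aixplain()` above.
client = Aixplain()

# The package also exposes a pre-built instance (None if construction failed at import time).
print(aixplain_v2 is not None)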
1 change: 1 addition & 0 deletions aixplain/enums/supplier.py
@@ -54,6 +54,7 @@ def load_suppliers():
    suppliers = Enum(
        "Supplier", {clean_name(w["name"]): {"id": w["id"], "name": w["name"], "code": w["code"]} for w in resp}, type=dict
    )
    suppliers.__str__ = lambda self: self.value["name"]

    return suppliers

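The one added line above gives the dynamically built Supplier enum a __str__ so that members print their supplier name instead of the full dict value. Below is a self-contained sketch of that pattern, using made-up supplier data in place of the backend response resp (the real code also passes names through clean_name() to produce valid identifiers):

from enum import Enum

resp = [{"id": "1", "name": "OpenAI", "code": "openai"}]  # hypothetical payload

Supplier = Enum(
    "Supplier",
    {w["name"]: {"id": w["id"], "name": w["name"], "code": w["code"]} for w in resp},
    type=dict,
)
# Same trick as in the diff: stringify members via their human-readable name.
Supplier.__str__ = lambda self: self.value["name"]

print(str(Supplier.OpenAI))  # -> OpenAI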
3 changes: 3 additions & 0 deletions aixplain/modules/agent/__init__.py
@@ -389,3 +389,6 @@ def deploy(self) -> None:
        assert self.status != AssetStatus.ONBOARDED, "Agent is already deployed."
        self.status = AssetStatus.ONBOARDED
        self.update()

    def __repr__(self):
        return f"Agent(id={self.id}, name={self.name}, function={self.function})"
Collaborator review comment on the added __repr__:
"Agent has no function, properly said. Please set the status instead."

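A minimal sketch of the change the reviewer is asking for, assuming Agent keeps a status attribute (deploy() above asserts and assigns it). This is the reviewer's suggestion, not the code merged in this hunk:

    def __repr__(self):
        # Report the agent's status rather than a function, per the review comment.
        return f"Agent(id={self.id}, name={self.name}, status={self.status})"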
52 changes: 17 additions & 35 deletions aixplain/modules/pipeline/asset.py
@@ -112,9 +112,7 @@ def __polling(
while not completed and (end - start) < timeout:
try:
response_body = self.poll(poll_url, name=name)
logging.debug(
f"Polling for Pipeline: Status of polling for {name} : {response_body}"
)
logging.debug(f"Polling for Pipeline: Status of polling for {name} : {response_body}")
completed = response_body["completed"]

end = time.time()
@@ -126,13 +124,9 @@
logging.error(f"Polling for Pipeline: polling for {name} : Continue")
if response_body and response_body["status"] == "SUCCESS":
try:
logging.debug(
f"Polling for Pipeline: Final status of polling for {name} : SUCCESS - {response_body}"
)
logging.debug(f"Polling for Pipeline: Final status of polling for {name} : SUCCESS - {response_body}")
except Exception:
logging.error(
f"Polling for Pipeline: Final status of polling for {name} : ERROR - {response_body}"
)
logging.error(f"Polling for Pipeline: Final status of polling for {name} : ERROR - {response_body}")
else:
logging.error(
f"Polling for Pipeline: Final status of polling for {name} : No response in {timeout} seconds - {response_body}"
@@ -162,9 +156,7 @@ def poll(self, poll_url: Text, name: Text = "pipeline_process") -> Dict:
resp["data"] = json.loads(resp["data"])["response"]
except Exception:
resp = r.json()
logging.info(
f"Single Poll for Pipeline: Status of polling for {name} : {resp}"
)
logging.info(f"Single Poll for Pipeline: Status of polling for {name} : {resp}")
except Exception:
resp = {"status": "FAILED"}
return resp
@@ -186,9 +178,7 @@ def _should_fallback_to_v2(self, response: Dict, version: str) -> bool:
should_fallback = False
if "status" not in response or response["status"] == "FAILED":
should_fallback = True
elif response["status"] == "SUCCESS" and (
"data" not in response or not response["data"]
):
elif response["status"] == "SUCCESS" and ("data" not in response or not response["data"]):
should_fallback = True
# Check for conditions that require a fallback

@@ -294,10 +284,7 @@ def __prepare_payload(
try:
payload = json.loads(data)
if isinstance(payload, dict) is False:
if (
isinstance(payload, int) is True
or isinstance(payload, float) is True
):
if isinstance(payload, int) is True or isinstance(payload, float) is True:
payload = str(payload)
payload = {"data": payload}
except Exception:
@@ -335,9 +322,7 @@ def __prepare_payload(
asset_payload["dataAsset"]["dataset_id"] = dasset.id

source_data_list = [
dfield
for dfield in dasset.source_data
if dasset.source_data[dfield].id == data[node_label]
dfield for dfield in dasset.source_data if dasset.source_data[dfield].id == data[node_label]
]

if len(source_data_list) > 0:
@@ -410,9 +395,7 @@ def run_async(
try:
if 200 <= r.status_code < 300:
resp = r.json()
logging.info(
f"Result of request for {name} - {r.status_code} - {resp}"
)
logging.info(f"Result of request for {name} - {r.status_code} - {resp}")
poll_url = resp["url"]
response = {"status": "IN_PROGRESS", "url": poll_url}
else:
@@ -428,7 +411,9 @@
error = "Validation-related error: Please ensure all required fields are provided and correctly formatted."
else:
status_code = str(r.status_code)
error = f"Status {status_code}: Unspecified error: An unspecified error occurred while processing your request."
error = (
f"Status {status_code}: Unspecified error: An unspecified error occurred while processing your request."
)
response = {"status": "FAILED", "error_message": error}
logging.error(f"Error in request for {name} - {r.status_code}: {error}")
except Exception:
@@ -477,9 +462,7 @@ def update(

for i, node in enumerate(pipeline["nodes"]):
if "functionType" in node:
pipeline["nodes"][i]["functionType"] = pipeline["nodes"][i][
"functionType"
].lower()
pipeline["nodes"][i]["functionType"] = pipeline["nodes"][i]["functionType"].lower()
# prepare payload
status = "draft"
if save_as_asset is True:
@@ -497,9 +480,7 @@
"Authorization": f"Token {api_key}",
"Content-Type": "application/json",
}
logging.info(
f"Start service for PUT Update Pipeline - {url} - {headers} - {json.dumps(payload)}"
)
logging.info(f"Start service for PUT Update Pipeline - {url} - {headers} - {json.dumps(payload)}")
r = _request_with_retry("put", url, headers=headers, json=payload)
response = r.json()
logging.info(f"Pipeline {response['id']} Updated.")
@@ -554,9 +535,7 @@ def save(

for i, node in enumerate(pipeline["nodes"]):
if "functionType" in node:
pipeline["nodes"][i]["functionType"] = pipeline["nodes"][i][
"functionType"
].lower()
pipeline["nodes"][i]["functionType"] = pipeline["nodes"][i]["functionType"].lower()
# prepare payload
status = "draft"
if save_as_asset is True:
@@ -589,3 +568,6 @@ def deploy(self, api_key: Optional[Text] = None) -> None:
        pipeline = self.to_dict()
        self.update(pipeline=pipeline, save_as_asset=True, api_key=api_key, name=self.name)
        self.status = AssetStatus.ONBOARDED

    def __repr__(self):
        return f"Pipeline(id={self.id}, name={self.name})"