Merge pull request #2 from EpistasisLab/master
merge
perib authored Aug 15, 2022
2 parents d39f3be + 9e6ad7e commit 9bca3a9
Showing 6 changed files with 53 additions and 9 deletions.
18 changes: 18 additions & 0 deletions .github/workflows/build-docs.yml
@@ -0,0 +1,18 @@
name: build-docs
on:
push:
branches:
- master
paths:
- docs_sources/**
- mkdocs.yml
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.x
- run: pip install mkdocs-material
- run: mkdocs gh-deploy --force
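The `paths:` filter above restricts the deploy job to pushes that touch the documentation sources or `mkdocs.yml`. A rough Python sketch of that trigger logic (GitHub's real glob matching is richer; the helper name here is hypothetical, for illustration only):

```python
def triggers_docs_build(changed_files):
    """Approximate the workflow's paths filter:
    docs_sources/** or mkdocs.yml triggers a rebuild."""
    return any(
        f == "mkdocs.yml" or f.startswith("docs_sources/")
        for f in changed_files
    )

print(triggers_docs_build(["docs_sources/api.md"]))  # -> True
print(triggers_docs_build(["tpot/base.py"]))         # -> False
```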
4 changes: 2 additions & 2 deletions docs_sources/api.md
@@ -93,7 +93,7 @@ Cross-validation strategy used when evaluating pipelines.
<br /><br />
Possible inputs:
<ul>
<li>integer, to specify the number of folds in a StratifiedKFold,</li>
<li>integer, to specify the number of folds in an unshuffled StratifiedKFold,</li>
<li>An object to be used as a cross-validation generator, or</li>
<li>An iterable yielding train/test splits.</li>
</ul>
</blockquote>
@@ -601,7 +601,7 @@ Cross-validation strategy used when evaluating pipelines.
<br /><br />
Possible inputs:
<ul>
<li>integer, to specify the number of folds in a KFold,</li>
<li>integer, to specify the number of folds in an unshuffled KFold,</li>
<li>An object to be used as a cross-validation generator, or</li>
<li>An iterable yielding train/test splits.</li>
</ul>
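The documentation change above pins down that an integer `cv` means an *unshuffled* split. A minimal, dependency-free sketch of what `KFold(n_splits=k, shuffle=False)` does (illustrative only; in practice use `sklearn.model_selection.KFold`):

```python
def unshuffled_kfold(n_samples, n_splits):
    """Yield (train, test) index lists the way KFold(shuffle=False) does:
    contiguous test folds, taken in order, with no randomness."""
    base, extra = divmod(n_samples, n_splits)
    start = 0
    for fold in range(n_splits):
        size = base + (1 if fold < extra else 0)
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

for train, test in unshuffled_kfold(10, 5):
    print(test)  # test folds come out as [0, 1], [2, 3], ..., [8, 9]
```

Because there is no shuffling, two runs over the same data always produce identical folds, which is why the distinction matters for reproducibility.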
1 change: 1 addition & 0 deletions docs_sources/using.md
@@ -722,3 +722,4 @@ A simple example of using TPOT-NN is shown in [examples](/tpot/examples/).
- TPOT will occasionally learn pipelines that stack several `sklearn` estimators. Mathematically, these can be nearly identical to some deep learning models. For example, by stacking several `sklearn.linear_model.LogisticRegression`s, you end up with a very close approximation of a Multilayer Perceptron, one of the simplest and most well-known deep learning architectures. TPOT's genetic programming algorithms generally optimize these 'networks' much faster than PyTorch, which typically uses a more brute-force convex optimization approach.

- The problem of 'black box' model introspection is one of the most substantial criticisms and challenges of deep learning. This problem persists in `tpot.nn`, whereas TPOT's default estimators often are far easier to introspect.

23 changes: 22 additions & 1 deletion mkdocs.yml
@@ -7,11 +7,32 @@ repo_url: https://github.com/epistasislab/tpot
edit_uri: edit/master/docs_sources/
docs_dir: docs_sources/
site_dir: docs/
theme: readthedocs
#theme: readthedocs
theme:
name: material
features:
- toc.integrate
palette:
# light mode
- scheme: default
toggle:
icon: material/brightness-7
name: Switch to dark mode

# dark mode
- scheme: slate
toggle:
icon: material/brightness-4
name: Switch to light mode

markdown_extensions:
- tables
- fenced_code
- pymdownx.highlight:
anchor_linenums: true
- pymdownx.inlinehilite
- pymdownx.snippets
- pymdownx.superfences

copyright: Developed by <a href="http://www.randalolson.com">Randal S. Olson</a> and others at the University of Pennsylvania

6 changes: 3 additions & 3 deletions tpot/base.py
@@ -1561,10 +1561,10 @@ def _evaluate_individuals(
]
]

self.dask_graphs_ = tmp_result_scores

with warnings.catch_warnings():
warnings.simplefilter("ignore")
tmp_result_scores = list(dask.compute(*tmp_result_scores))
tmp_result_scores = list(dask.compute(*tmp_result_scores, num_workers=self.n_jobs))

else:

@@ -1812,7 +1812,7 @@ def _mate_operator(self, ind1, ind2):
offspring.statistics["generation"] = "INVALID"
break

return offspring, offspring2
return offspring, offspring2, self.evaluated_individuals_

@_pre_test
def _random_mutation_operator(self, individual, allow_shrink=True):
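The `num_workers=self.n_jobs` argument added above caps the Dask scheduler's parallelism at TPOT's configured `n_jobs` rather than letting it default to all available cores. The effect can be sketched with a stdlib analogue (a hypothetical `compute_all` helper, not Dask's API):

```python
from concurrent.futures import ThreadPoolExecutor

def compute_all(tasks, num_workers):
    """Evaluate lazy (zero-argument) tasks with at most num_workers threads,
    preserving input order -- loosely analogous to
    dask.compute(*tasks, num_workers=...)."""
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        futures = [pool.submit(t) for t in tasks]
        return [f.result() for f in futures]

tasks = [lambda i=i: i * i for i in range(4)]
print(compute_all(tasks, num_workers=2))  # -> [0, 1, 4, 9]
```

Capping the worker count keeps pipeline evaluation from oversubscribing the machine when `n_jobs` is set below the core count.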
10 changes: 7 additions & 3 deletions tpot/gp_deap.py
@@ -131,13 +131,17 @@ def varOr(population, toolbox, lambda_, cxpb, mutpb):
if op_choice < cxpb: # Apply crossover
ind1, ind2 = pick_two_individuals_eligible_for_crossover(population)
if ind1 is not None:
ind1, _ = toolbox.mate(ind1, ind2)
ind1_cx, _, evaluated_individuals_ = toolbox.mate(ind1, ind2)
del ind1.fitness.values

if str(ind1_cx) in evaluated_individuals_:
ind1_cx = mutate_random_individual(population, toolbox)
offspring.append(ind1_cx)
else:
# If there is no pair eligible for crossover, we still want to
# create diversity in the population, and do so by mutation instead.
ind1 = mutate_random_individual(population, toolbox)
offspring.append(ind1)
ind_mu = mutate_random_individual(population, toolbox)
offspring.append(ind_mu)
elif op_choice < cxpb + mutpb: # Apply mutation
ind = mutate_random_individual(population, toolbox)
offspring.append(ind)
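Together with the extra `evaluated_individuals_` return value from `_mate_operator`, the `varOr` change above avoids re-adding a crossover child whose pipeline was already scored: a duplicate child is replaced with a freshly mutated individual to preserve diversity. A simplified sketch of that fallback (names are illustrative, not TPOT's API):

```python
def pick_offspring(ind1_cx, evaluated_individuals, mutate):
    """Prefer the crossover child, but fall back to mutation when the
    child duplicates an already-evaluated pipeline (keyed by its
    string representation, as in varOr above)."""
    if str(ind1_cx) in evaluated_individuals:
        return mutate()
    return ind1_cx

evaluated = {"A+B"}
print(pick_offspring("A+B", evaluated, mutate=lambda: "A*C"))  # -> A*C
print(pick_offspring("A-B", evaluated, mutate=lambda: "A*C"))  # -> A-B
```

The mutation fallback means no evaluation budget is spent re-scoring a pipeline the search has already seen.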
