Coupling of service.yaml to continuous-deployment.yaml #33
Since service.yaml is defined in ecs-refarch-continuous-deployment.yaml, whenever we change that file, for example by adding a new subnet, the update resets the service back to the initial image, amazon/amazon-ecs-sample, which might be undesirable.

Assuming my understanding is correct, any ideas on how to mitigate this, or how to work around it?
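For illustration, the relevant fragment of service.yaml looks roughly like this (a sketch, not the exact file; the container name is a placeholder):

```yaml
# The pipeline swaps the running image out-of-band, but the template still
# names the sample image, so any stack update reverts the service to it.
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ContainerDefinitions:
      - Name: simple-app              # placeholder
        Image: amazon/amazon-ecs-sample
        Memory: 128
```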
Thanks for opening this! Your understanding is definitely correct. It'll also reset other parameters of the service in the same way.
I'm not sure there's a great answer here for how to handle both CloudFormation and CodePipeline mutating the same set of resources. If your goal is to manage your infrastructure with CloudFormation, you'd be better served by CodePipeline's CloudFormation integration, creating new task definitions and changing the service's task definition ARN, rather than by the native CodePipeline ECS integration on deployment. That would let you change the template in the S3 bucket or provide new subnet IDs in the pipeline while maintaining a consistent state. It doesn't solve everything raised here, but you can use this version as a reference showing how to implement it. Note that Service is not a nested stack of the main template there. I'm going to leave this open until a better answer emerges.
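For example, a deploy stage along those lines could look roughly like this in the pipeline template (a hedged sketch: the Tag parameter, the build.json artifact, and the role and stack names are assumptions, not necessarily what the reference implementation uses):

```yaml
# Sketch of a CodePipeline stage that updates the service stack through the
# CloudFormation deploy action instead of the native ECS action. The new
# image tag is read from a build artifact and passed as a parameter
# override, so the stack and the running service stay consistent.
- Name: Deploy
  Actions:
    - Name: UpdateServiceStack
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: "1"
      InputArtifacts:
        - Name: Template       # source artifact containing service.yaml
        - Name: BuildOutput    # build artifact containing build.json
      Configuration:
        ActionMode: CREATE_UPDATE
        StackName: my-service-stack              # assumption
        TemplatePath: Template::service.yaml
        Capabilities: CAPABILITY_NAMED_IAM
        RoleArn: !GetAtt CloudFormationRole.Arn  # assumption
        ParameterOverrides: |
          { "Tag": { "Fn::GetParam": ["BuildOutput", "build.json", "tag"] } }
```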
I'd just like to note that I've also been bitten by this. It's more of a chicken-and-egg situation: I need to define a custom entry point, so I really need a non-example container, but I can't get that until the stack is created and the pipeline is running.
Thanks @jinty. 😔 Wish I had a better answer here. In some ways, the flexibility of the CloudFormation approach driven by CodePipeline lends itself to a more reusable example.
I just wanted to let you know that I've spent the last couple of weeks building this version of the CD pipeline. It seems to work fine and I'm pretty happy with it, so I ended up using that approach.

One major problem is that if I push a program that builds fine in Docker but fails to launch (maybe because of a missing parameter or something), ECS will keep trying to start the service over and over, and it takes CloudFormation 3 hours to time out. It's not great to have the pipeline stall for 3 hours, and there seems to be no way to shorten that timeout. (I spent about 3 hours trying 😉) Obviously I'll need integration tests before deploying to ECS, but another thing to try would be to create the TaskDefinition with the new image tag in a separate pipeline stage and try to launch it on its own before updating the service; see the sketch below.

As for the original issue: I recommend merging the alternate implementation into master, because what's on master right now isn't a usable example for anyone building and deploying their own applications. Otherwise I think everyone using this as a starting point will get bitten by this issue eventually.
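Something like this could work for that launch test (a hypothetical sketch: the cluster variable, taskdef.json file, and cleanup step are assumptions):

```yaml
# Hypothetical smoke-test buildspec: register a task definition with the new
# image tag and run it once, failing the build if the task never reaches
# RUNNING. The wait polls for up to ~10 minutes instead of CloudFormation's
# 3-hour service stabilization timeout.
version: 0.2
phases:
  build:
    commands:
      - TASK_DEF_ARN=$(aws ecs register-task-definition
          --cli-input-json file://taskdef.json
          --query 'taskDefinition.taskDefinitionArn' --output text)
      - TASK_ARN=$(aws ecs run-task --cluster "$CLUSTER_NAME"
          --task-definition "$TASK_DEF_ARN"
          --query 'tasks[0].taskArn' --output text)
      - aws ecs wait tasks-running --cluster "$CLUSTER_NAME" --tasks "$TASK_ARN"
      - aws ecs stop-task --cluster "$CLUSTER_NAME" --task "$TASK_ARN"
```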
Thanks, @SunlightJoe. Can you expand on the "merging the alternate implementation" suggestion? Are you suggesting moving back toward using CloudFormation for the deploy stage?
Yes, that's what I mean.
I think I've found another solution. When I was looking for a way to get around the stuck ECS service, somebody suggested setting the image through a template parameter instead of hard-coding it. So I added a parameter for the image tag to the service template. After the first successful run of the pipeline that creates an image, I rerun the CloudFormation update with "use previous value" for that parameter, and the service keeps the deployed image across template changes. This doesn't solve the problem of the update resetting other service settings, though.
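Concretely, the idea looks something like this in service.yaml (a sketch, assuming the parameter is named Tag and the image lives in ECR; the family and repository names are placeholders):

```yaml
# The image tag comes from a stack parameter instead of being hard-coded,
# so a template update can keep the currently deployed tag.
Parameters:
  Tag:
    Type: String
    Default: latest

Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: my-app
      ContainerDefinitions:
        - Name: my-app
          Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/my-repo:${Tag}"
          Memory: 256
```

When updating the template afterwards, passing `ParameterKey=Tag,UsePreviousValue=true` to `aws cloudformation update-stack` keeps the deployed tag instead of resetting it to the default.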
@SunlightJoe this seems like a great workaround. However, I ran into a strange issue: after the pipeline pushed an image tagged only with the commit hash, the Fargate task was unable to pull that tag from the repository.

The only fix for this seemed to be doing a local docker build/tag/push with a "latest" tag, which does something to the repository that makes the Fargate task able to pull the commit hash tag it was trying to pull all along.

Edit: I fixed this. By adding a build and a post_build command that create a "latest" tag, this now works: build the image, tag it with both the commit hash and "latest", and push both tags, as in the buildspec below.
Here is my buildspec:
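(A reconstruction of that kind of buildspec rather than the verbatim file; REPOSITORY_URI, the tag derivation, and the build.json artifact are assumptions based on the steps above.)

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # log in to ECR (CLI v1 syntax; with CLI v2 use
      # `aws ecr get-login-password | docker login ...`)
      - $(aws ecr get-login --no-include-email)
      # derive an image tag from the commit hash
      - TAG=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c 1-8)
  build:
    commands:
      - docker build -t "$REPOSITORY_URI:$TAG" .
      # also tag "latest" so the repository always carries it
      - docker tag "$REPOSITORY_URI:$TAG" "$REPOSITORY_URI:latest"
  post_build:
    commands:
      - docker push "$REPOSITORY_URI:$TAG"
      - docker push "$REPOSITORY_URI:latest"
      # hand the commit tag to the deploy stage
      - printf '{"tag":"%s"}' "$TAG" > build.json
artifacts:
  files:
    - build.json
```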