May. 4th, 2018

#thotcon badge?

[root@m4700 ~]# esptool write_flash 0x00000 /home/yuan/tracking/thotcon0x9/tc0x9.bin 
esptool.py v2.3.1
Connecting....
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Warning: Could not auto-detect Flash size (FlashID=0x1851f, SizeID=0x1), defaulting to 4MB
Flash params set to 0x0240
Compressed 296480 bytes to 211973...
Wrote 296480 bytes (211973 compressed) at 0x00000000 in 18.8 seconds (effective 126.0 kbit/s)...
Hash of data verified.

Leaving...
Hard resetting via RTS pin...

I don't know why, but on Windows curl only downloaded half of the file, and both the exe and the py versions of esptool reported "timeout receiving header"; all I could see was an LED blinking on the board. On Linux it flashed in one go. Probably because I had flipped the switch before moving over to Linux to try; I originally thought the switch had to be set to off in order to flash.

When I first got it, the TPP page could display a few lines of text, but now it is stuck at "Loading". It's probably broken.

Apr. 24th, 2018

Portfolio weighted averages

Last week I got an assignment to build a small Java library for some frequently used portfolio calculations. A portfolio is a collection of holdings; each holding has a weight and several associated attributes, all typed as Double. The calculations involved are just sums and averages, which are trivial with Java 8 Streams, so I argued there was nothing worth reusing, and that a very generic routine would be hard to understand and use. To make the point, I hand-crafted a "classify" method that accepts a Function<H,C> to find a holding's classification, keeps a Map<C,Double> as internal state, and exposes a method to update that state and another to export a result. Internally it calls a GenericAccumulator that simply iterates over the input. Long/short handling was added inside the iteration loop, while the sum/average logic lived in the body of "classify". This successfully confused everyone, including myself.
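
For contrast, here is the kind of thing I mean by "easy with Java 8 Streams": a weighted average per classification bucket in one collector. This is a from-scratch sketch, not the library code; the Holding type and weightedAverageBy are hypothetical names.

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class PortfolioCalc {

    // Hypothetical holding: a weight plus one attribute, classified by sector.
    static class Holding {
        final double weight;
        final double value;
        final String sector;
        Holding(double weight, double value, String sector) {
            this.weight = weight;
            this.value = value;
            this.sector = sector;
        }
    }

    // Weighted average of "value" for each classification bucket.
    static <C> Map<C, Double> weightedAverageBy(List<Holding> holdings,
                                                Function<Holding, C> classify) {
        return holdings.stream().collect(Collectors.groupingBy(
                classify,
                Collectors.collectingAndThen(Collectors.toList(), group -> {
                    double totalWeight = group.stream().mapToDouble(h -> h.weight).sum();
                    double weightedSum = group.stream()
                            .mapToDouble(h -> h.weight * h.value).sum();
                    return totalWeight == 0.0 ? 0.0 : weightedSum / totalWeight;
                })));
    }

    public static void main(String[] args) {
        List<Holding> portfolio = Arrays.asList(
                new Holding(0.6, 102.0, "tech"),
                new Holding(0.3, 55.0, "tech"),
                new Holding(0.1, 98.0, "energy"));
        System.out.println(weightedAverageBy(portfolio, h -> h.sector));
    }
}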

Apr. 20th, 2018

Installed nvidia driver

Since Fedora 28, Wayland has been crashing for me (Dell m4700 with an external monitor), so I took some time to try the nvidia GPU driver. At first I looked at https://www.if-not-true-then-false.com/2015/fedora-nvidia-guide/ which shows what the result looks like once installed: the "About" page will show the nvidia card name. Then, since it is easier to install from RPM Fusion, I followed https://rpmfusion.org/Howto/NVIDIA. The document is concise but helpful. For example, when it says "Secure Boot" has issues, it really is best turned off in the BIOS. For another example, when it says "Wayland" has issues and that something must be installed from Copr, that is indeed the case. The "grubby" command for updating the kernel command line is helpful too (a sketch follows).
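
For reference, this is the shape of the grubby invocation I mean; the exact arguments for blacklisting nouveau come from the RPM Fusion page, so treat these as an example:

# append arguments to the command line of every installed kernel
sudo grubby --update-kernel=ALL --args="rd.driver.blacklist=nouveau modprobe.blacklist=nouveau"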

I followed another article, https://gorka.eguileor.com/vbox-vmware-in-secureboot-linux/, to sign the modules: first create a key, then register the key with UEFI. I had never done this before. I cannot find the ".system_keyring" keyring, but /proc/keys shows something else.
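
From memory, the procedure looks roughly like this; the file names and the key's CN are placeholders, and the article has the authoritative steps:

# 1. create a key pair; UEFI wants the certificate in DER format
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -subj "/CN=my module signing key/" \
    -keyout MOK.priv -outform DER -out MOK.der

# 2. queue the key for enrollment; mokutil asks for a one-time password,
#    and the firmware's MOK manager prompts at the next reboot
sudo mokutil --import MOK.der

# 3. after enrolling, sign the module (sign-file ships with kernel-devel)
sudo /usr/src/kernels/$(uname -r)/scripts/sign-file sha256 MOK.priv MOK.der \
    /path/to/module.ko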

Jan. 25th, 2018

Terraform tips (6)

To create a resource without updating it afterwards, use the "ignore_changes" lifecycle property.

There are two cases where we used this as a workaround. First, the lambdas are created with Terraform, but code and configuration updates happen in a separate process. To prevent Terraform from overwriting the code, the source_code_hash property can be ignored.

The other is aws_lambda_alias. The issue is again caused by the two-step process: a version cannot be published until the last moment. Fortunately, "function_version" can be ignored as well. A sketch of both cases follows.
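
A minimal sketch of both cases, in the 0.11 syntax we use; the names and most attributes are made up, the lifecycle blocks are the point:

resource "aws_lambda_function" "example" {
  function_name    = "example"
  role             = "${aws_iam_role.example.arn}"
  handler          = "index.handler"
  runtime          = "java8"
  filename         = "placeholder.zip"
  source_code_hash = "${base64sha256(file("placeholder.zip"))}"

  # the deployment pipeline owns the code after creation
  lifecycle {
    ignore_changes = ["source_code_hash"]
  }
}

resource "aws_lambda_alias" "live" {
  name             = "live"
  function_name    = "${aws_lambda_function.example.function_name}"
  function_version = "$LATEST"

  # the pipeline publishes versions and repoints the alias later
  lifecycle {
    ignore_changes = ["function_version"]
  }
}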

Another tip I want to mention is to read the code of the verified modules. As I said in a previous post, Terraform lacks macros, so everything gets repeated, and defining resources in modules is hard to manage. But a well-written module seems to work. A module typically defines only one core resource, like one lambda or one s3 bucket (see the sketch below). Wrapping a single resource in a module might be overkill, but writing shell scripts to generate code is not fun. The one thing shell scripts can do that a module cannot is turn a property that requires a value into an optional one.
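
The single-resource module shape I mean, roughly (names made up; the variable simply feeds the one resource and the output exposes it):

# modules/bucket/main.tf
variable "name" {}

resource "aws_s3_bucket" "this" {
  bucket = "${var.name}"
}

output "arn" {
  value = "${aws_s3_bucket.this.arn}"
}

# in the calling (root) module
module "logs_bucket" {
  source = "../modules/bucket"
  name   = "myapp-logs"
}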

Dec. 24th, 2017

Terraform tips (5)

AWS resources are not applied in real time. For some, the provider can wait for the resource to be created, or poll until it exists. But even when the provider and the service agree, another resource may still hold an older view.

The documentation calls this "eventual consistency". The end result is that a "tf apply" may fail at first but succeed on a second or later run.

Two resources have shown this issue so far:

— IAM role and policy attachment. When a role is created, attaching a policy usually works right away, but a user of that policy will likely see the role created while the policy is not yet attached. The user should have "depends_on" pointing at the attachment, and the attachment should have some sleep (see the sketch after this list). Currently I set it to 15s, which is acceptable.

— CloudWatch alarms also depend on the policy being created. Similarly, add depends_on and a 15s sleep. There is a bug report saying the policy is not created after the target is re-created, and the workaround was to use interpolation in the policy's name; but that ticket did not mention the sleeping part.
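
The IAM case looks roughly like this in 0.11 syntax (names and the policy ARN are placeholders); the sleep rides on the attachment as a local-exec provisioner, so it runs once at creation:

resource "aws_iam_role" "lambda" {
  name               = "example-lambda"
  assume_role_policy = "${file("lambda-trust.json")}"
}

resource "aws_iam_role_policy_attachment" "lambda" {
  role       = "${aws_iam_role.lambda.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"

  # give IAM time to propagate before anything reads the role
  provisioner "local-exec" {
    command = "sleep 15"
  }
}

resource "aws_lambda_function" "example" {
  function_name = "example"
  role          = "${aws_iam_role.lambda.arn}"
  handler       = "index.handler"
  runtime       = "python3.6"
  filename      = "example.zip"

  depends_on = ["aws_iam_role_policy_attachment.lambda"]
}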

Terraform tips (4)

1. generate files for providers and for the backend

The provider file needs to pin the provider version, and its content will be the same in every "root" module. It is best not to repeat oneself by checking in the same file multiple times. It is not javascript..

Similarly, the backend is defined in a file that is best generated too. One reason is that different use cases require different configurations -- automation has different AWS credentials, while running locally uses a different state file path and credential parameters; manually editing files just to run it is too expensive, and generation prevents mistakes from hand-editing this file. Another reason is that, since the files are generated anyway, it is easy to substitute a state file path. A sketch of the generated output follows.
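
What the two generated files might look like; the version constraint, bucket and key are placeholders that the wrapper script substitutes (the backend block cannot interpolate variables, which is part of why generating it is attractive):

# providers.auto.tf (generated)
provider "aws" {
  version = "~> 1.20"
  region  = "us-east-1"
}

# backend.auto.tf (generated)
terraform {
  backend "s3" {
    bucket = "my-state-bucket"
    key    = "myapp/prod/terraform.tfstate"
    region = "us-east-1"
  }
}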

2. generate files for different environments

As in #1, variable values can be generated. Settings like "vpc" can be defined in one place, then substituted and copied into the "root" module folder for use, saving a lot of duplication and manual work; an example follows.
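
A generated per-environment file can be as small as this (values are placeholders):

# settings.auto.tfvars (generated for PROD)
vpc_id = "vpc-0a1b2c3d"
env    = "prod"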

3. generate unimportant resources

Since there is already a chance to run shell scripts that generate files, why not generate everything..

Actually, it is because Terraform lacks the ability to define macros. A resource must either exist or be about to be created; one cannot create a different number of resources in different environments. Suppose a list of s3 bucket names is given and Terraform has to define a resource for each of them.. even though "count" can be used as a workaround, the resource names become awkwardly hard to use, as the sketch below shows.
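
The count workaround in 0.11 syntax (variable name made up); note how the instances must then be addressed by index or splat:

variable "bucket_names" {
  type    = "list"
  default = ["logs", "artifacts", "backups"]
}

resource "aws_s3_bucket" "named" {
  count  = "${length(var.bucket_names)}"
  bucket = "${element(var.bucket_names, count.index)}"
}

# one instance:  "${aws_s3_bucket.named.0.arn}"
# all instances: "${aws_s3_bucket.named.*.arn}"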

Terraform tips (3)

(These will be shorter from here on, since I don't know that much; I have just been using it a lot recently.)

1. add alias

in .bashrc, alias "terraform" to "tf", which is also the file extension. It saves a lot of typing.

2. input, output and local variables

When a variable is declared with "var", it is an "input" variable; when declared as "output", it is literally that. They are _not_ for debugging purposes. Input and output are relative to a module.

Then "locals" defines variables in tfvars format. It is barely taught in the documentation, yet it is the most useful feature. The reason is that "var" cannot use interpolation, while an "output" cannot be referenced within the current module. The only way to define something useful is through "locals". A sketch follows.
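
A minimal illustration of all three in 0.11 syntax (names made up): the input cannot interpolate, the local can, and the output is only visible to the calling module.

variable "env" {}                   # input: supplied by the caller or a tfvars file

locals {
  bucket_name = "myapp-${var.env}"  # locals may interpolate; a "var" default may not
}

resource "aws_s3_bucket" "app" {
  bucket = "${local.bucket_name}"
}

output "bucket_arn" {               # readable by the caller, not within this module
  value = "${aws_s3_bucket.app.arn}"
}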

3. modules and root module

A folder can be loaded as a module. There is exactly one "root" module when terraform executes, and for each module, terraform loads all the files in that folder. The recognized files are:

— all .tf files

— all .auto.tfvars files

— I name the generated files ".auto.tf", which falls under the first rule.

— the special file terraform.tfvars. I avoid this special file entirely -- it is in .gitignore just in case. When I do have to use a tfvars file, I name the generated ones ".auto.tfvars", which falls under the second rule. For different environments I may also define "terraform.PROD.tfvars", which _won't_ get loaded unless the "-var-file" parameter is passed.

Terraform tips (2)

I started my career with XSLT programming and XSL-FO, and Terraform is much like XSLT; it is as if I have a hammer for every nail. People hate XSLT. The syntax is unlike a programming language, and version 1.0 has only a limited set of functions (thanks to the extension packs it can at least do something). And once you get the idea of template matching, with the matching criteria magically working, there is only one input and one output, so you never end up with a large program. Everything is text: the input, the code, the output. Extremely friendly to unix tools, except that XML itself is so bad for unix tools. Any structured text is bad for command-line tools; even if one can use XPath and tools like "jq", who can remember all that syntax?

I cannot express all my complicated feelings toward XSLT with my limited language skills. Terraform shares all the properties that make people hate XSLT, and it is worse, because

— there is no document model to work on; the code itself is the data. Terraform has data structures like lists and maps, but unfortunately they are only useful in interpolation. A resource can be linked to a data structure through a workaround (count=...), but it would be great to manipulate resources and data structures in the same way. Even within interpolation, there is no way to match and filter; the only comparable thing is still a workaround, like printing two identical-length lists into one (see the sketch below).
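
What I mean by that last workaround, roughly, in 0.11 syntax: formatlist distributes over its list arguments element by element (variable names made up).

variable "keys"   { type = "list" }
variable "values" { type = "list" }

# keys = ["a", "b"] and values = ["1", "2"] yield ["a=1", "b=2"]
output "pairs" {
  value = "${formatlist("%s=%s", var.keys, var.values)}"
}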

Dec. 23rd, 2017

Terraform tips (1)

In the past two sprints (each sprint is two weeks, but the last one was a bit messier and longer) I have been working on some Terraform scripts. We use the AWS provider exclusively. The workflow is like this:

— A Git repository holds all the Terraform scripts and shell wrapper scripts, as well as the docker configurations (Dockerfile) and the Jenkins pipeline configurations.

— There are two types of Jenkins pipelines. One builds one or more docker images, copying the artifact from another Jenkins job. The tip here is that distributing the Dockerfile together with the artifact can be helpful, since it can contain the exact artifact file name after maven's processing. On the other hand, putting the Dockerfile in the same place as the Jenkins pipeline scripts makes it really easy to update (they often need to change together), especially in an environment like ours -- it takes time to get a PR merged into a source code repository, and a lot more time to get the artifact rebuilt just to update a Dockerfile.

Jul. 19th, 2017

My tweets

  • Wed, 09:34: Without threading support, Outlook managed to introduce "Focused" and "Other" as tabs in the inbox. Now I've got two inboxes to check.