chapter11, Is hand optimization still needed?

Recent compilers optimize extremely well, so it is usually best to let the compiler do the work rather than attempt risky hand optimization yourself.

For example, the following code looks inefficient, but the compiler's return-value optimization constructs the returned object directly in the caller's storage. It does not bother to create an object on the stack and then copy it “by value” into the return value.

CHash create_sha256() {
    CHash obj;
    obj.algo = CHash::ALGO::SHA256;
    return obj;
}

CHash hash = create_sha256();

Ever since “pass by reference” became available, many developers seem to assume it is the efficient way. However, if you can get the same optimization with the safer “pass by value”, you should use “pass by value”.

chapter9, OpenSSL multi-threading support

This code stores all objects in memory allocated by “OPENSSL_malloc”. However, as of now, “OPENSSL_malloc” is essentially equivalent to “malloc”, so going that far may not be necessary.

Code using “new”.

Code using “placement new”.

chapter8, Placement new

There is a piece of syntax, “placement new”, that constructs an instance at an already allocated memory address.

However, not all of an instance's contents live in that memory: members may allocate their own memory with “new”, and those allocations fall outside the pre-allocated block. For this reason, we would like to use placement new only when an instance must be created in memory obtained from wrapped allocation functions, not for speed.

For example, “OPENSSL_malloc”. Plain “new” cannot be used to allocate this memory, so after allocating with “OPENSSL_malloc”, use “placement new” on that address. However, care must be taken to ensure that objects created this way do not allocate external memory themselves. Also, with “placement new” you must call the destructor manually; placement new is the reason the language allows calling a destructor directly at all.

chapter7, Before searching, remove known information and swap the starting position for a random one

This is a common technique we used when building the data-recovery logic. In data recovery, there is a long process of finding the one correct answer among the records that were read. Processing from the head therefore takes time in proportion to the number of steps.

Drives that require data recovery contain a lot of missing information. In other words, if you reconstruct the missing information in advance, you can first collect the information that can be removed.

This information is matched against the retrieved records, removing the unnecessary ones before processing. Then, instead of starting at the beginning of the search, we randomly select one of the remaining records.

Why select again at random? The extra cost is negligible, since it is only a random selection. But after the unnecessary records have been removed, the correct record is far more likely to be among the remaining ones than to be the head record you originally picked.

This is easiest to see with a hundred cards. Mark one of the cards and make it the correct answer. Shuffle and take one. Then remove the 98 unmarked cards from the rest. You now hold one card, and one card remains. At this point it is already almost certain which one is marked: the remaining card is correct 99 times out of 100. This is the reason for swapping.

chapter6, Faster with code optimization

In the last ten years, the language has changed and compilers have become faster. In the past, I used many references for speed, and worked shift operations and bit operations into the code for speed. What about now? For example, using a shift operation to perform a division? In the era of Visual C++ 2005, even this was noticeably faster. But with current compilers and CPUs, there is no problem using ordinary division or modulo.

However, bit operations still have many uses. For example, an operation that quickly returns the position of the lowest set bit. I want to use them actively.


The drive management information contains fixed values and variable values. These are managed in separate structures.

Therefore, by taking the hash of an instance of the fixed-value structure, the drive can be identified from the hash value. This fixed-value structure is “DRIVE_IDENTIFY_INFO”.

The structure is an aggregate. For this reason, it can always be initialized with {0}. An aggregate cannot contain a “method” that uses externally managed memory. However, since many functions that operate on a drive require direct memory access, the aggregate is used once as a receiving buffer. Moving the data into a non-aggregate class only when it is actually used gives a better overall structure.

This eliminates the need, when later calling a drive-operating function, to re-store the object instance in memory, pass it to the function, and copy the return value back into the object. Of course, the fixed values are obtained only once, but in practice they are passed together with the variable values, so even when updating the variable values, the processing must include the fixed values.

chapter4, Data structure [DRIVE_IDENTIFY_INFO]

This is one of the structures that stores the drive information obtained.

typedef uint64_t sector_t;
typedef uint32_t cluster_t;
typedef uint32_t param_t;
typedef uint16_t wchar_t; // 16-bit characters; note that “wchar_t” is a keyword in C++, so a distinct name would be needed there
static const int STRING_LENGTH = 128;
typedef struct _DRIVE_IDENTIFY_INFO {
    identify_support_type ist;
    identify_support_extend ise;
    sector_t cylinders;
    param_t tracksPerCylinder;
    param_t sectorsPerTrack;
    param_t bytesPerSector;
    sector_t sectorsGetDevice;
    sector_t sectorsGetIdentify;
    sector_t sectorsLbaLimit;
    param_t rpm;
    param_t buffer_size;
    param_t buffer_type;
    param_t nvcache_size;
    param_t aam_recommended;
    wchar_t model_name[STRING_LENGTH];
    wchar_t vendor[STRING_LENGTH];
    wchar_t firmware[STRING_LENGTH];
    wchar_t serial[STRING_LENGTH];
} DRIVE_IDENTIFY_INFO;

chapter3, Data structure [DRIVE_FAILURE_INFO]

When managed by the central server, processing is performed using the following binary structure.

typedef struct _DRIVE_FAILURE_INFO {
    size_t size;
    int version;
    uint8_t hash[64];
    uint8_t binary[1]; // variable-length payload follows (“struct hack”)
} DRIVE_FAILURE_INFO;

In the transition to decentralization, this structure is accumulated in a database through a serialization interface, and its verification is performed by the blockchain.

chapter2, Centralized management

This is a method of accumulating and analyzing data on a central server.


This mechanism collects drive-specific information and the locations of bad sectors.

The central server then sends back the analysis result.

If any elements are missing from the analysis, additional checks are performed.

Finally, the results for the checked drive are displayed.

chapter1, Collect and analyze drive information

First, we collect drive failure information and find out what can be done with it.
It is in the following link.

Statistically, drives degrade unevenly.
For example, once drives of a similar model begin to fail, the failures tend to continue.

If you do this standalone, you run into the following issues.

  • The amount of data increases so much that processing becomes difficult in a short time.
  • It is difficult to delete unnecessary data.

Recommended SSD!

@sorachan Hello! Tell me about the price range of drives. I want to replace mine; a more expensive one is fine, what matters most is that it is good.) SSD Samsung 860 Evo 250GB

Hello!

If you are selecting an SSD in that price range, we recommend Intel. This is to get optimal performance on a “blockchain” workload with many writes.

Please manage your password carefully

I usually spend some time in Discord DMs explaining what I am doing.

In recent years, it seems that many people forget their wallet passwords.

If you forget it, the coins will be lost.
Even the management cannot move coins whose password has been forgotten.

By the way, this password belongs to wallet.dat itself. Therefore, if you back up after setting a password and then set “another password”, the previously backed-up wallet.dat keeps the previous password. Using this property, if you keep several backups with different passwords, then even in the worst case, as long as you remember at least one of them, all the coins will be safe.

Be careful with password management.

Managing funds by myself

I have been drinking at home recently, probably because I’m tired of going out for a drink.

Because of that, I have been reading more articles.
The price of cryptocurrency reacts to factors like these.

And gold.
However, I have seen that there are discrepancies in the reserves.

I’m preparing my own asset management so that this doesn’t happen to me.