Saigonauticon

joined 1 year ago
[–] Saigonauticon@voltage.vn 2 points 9 months ago (1 children)

An international parts order is too complex for such a small thing. I'm not in the USA or China. So no TP5000 for me, got to work with what I have.

I agree, no charging at 4.2 volts. The current charger I built seems to work well enough. I ran some tests and it charges within spec. The reason I turn off the charger to measure cell voltage is that otherwise I'd mainly be measuring SMPS noise.

Anyway it beats the charger available in the local market, which is clearly unsafe, no matter how much they assure me that it's 'totally OK'.

[–] Saigonauticon@voltage.vn 3 points 9 months ago

No worries! I appreciate that you were just trying to assist!

[–] Saigonauticon@voltage.vn 6 points 9 months ago (2 children)

Battery University is indeed a great resource!

However, this is not a lithium polymer battery, and since it's a 32700, it's not a prismatic or pouch cell either. It is a lithium iron phosphate (LiFePO4) cylindrical cell in a metal housing. Battery University does have the chemistry listed in their summary table (in case you're curious), but they don't seem to have much detailed information. Enough to build a charger, though :)

https://batteryuniversity.com/article/bu-216-summary-table-of-lithium-based-batteries

Also some more detailed information here:

https://batteryuniversity.com/article/bu-205-types-of-lithium-ion

Thanks for the reference in any case! I'm not responding to criticize you, only to improve the utility of this conversation in case someone else finds it via search :)

 

So, these great 32700 LiFePO4 batteries showed up in my local industrial market. For like $2 USD!

However, there are no LiFePO4 chargers available. The vendors assure me I can "totally use" a 4.2V Li-ion charger, but I don't believe them (although the cells test as being in good shape).

I whipped up a 5V system with a buck converter managed by an MCU. It turns off the buck converter that charges the battery, measures the battery voltage, and re-enables the buck converter if the cell is under 3.6V. This repeats every few hundred milliseconds.
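
Roughly, the loop amounts to this (an illustrative Arduino-style sketch; the pin numbers, ADC scaling, and timing are placeholders rather than the actual firmware):

```cpp
// Hysteretic LiFePO4 charge control: disconnect the buck converter, measure the
// cell while the SMPS is quiet, and only re-enable charging below the cutoff.
const int   BUCK_EN_PIN  = 2;    // enable pin for the buck converter (placeholder)
const int   VBAT_ADC_PIN = A0;   // battery voltage sense (placeholder)
const float VREF    = 5.0;       // assuming a 5V MCU with a 10-bit ADC
const float VCUTOFF = 3.60;      // LiFePO4 charge cutoff

void setup() {
  pinMode(BUCK_EN_PIN, OUTPUT);
  digitalWrite(BUCK_EN_PIN, LOW);
}

void loop() {
  digitalWrite(BUCK_EN_PIN, LOW);             // stop charging so we don't just measure SMPS noise
  delay(20);                                  // let the rail settle
  float vbat = analogRead(VBAT_ADC_PIN) * VREF / 1023.0;
  if (vbat < VCUTOFF) {
    digitalWrite(BUCK_EN_PIN, HIGH);          // below cutoff: resume charging
  }
  delay(300);                                 // re-check every few hundred ms
}
```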

Did I overengineer this? Could I have just used a linear voltage regulator that outputs 3.6V (or a Zener), and a current-limited 5V power supply?

Charge speed is not really important in my application. Anything under 4 hours is great. Frankly, I'm just trying to phase out the less safe kinds of lithium cell in my lab.

[–] Saigonauticon@voltage.vn 1 points 9 months ago

No, that won't work.

A vibration switch will work.

If that's not sensitive enough, another option is a piezo element coupled to the case to detect vibration, with an op-amp or hex inverter to buffer it and trigger the 555. However, if you couple it too closely to e.g. the floor or furniture, it will pick up nearby footsteps or passing cars. That might be good or bad depending on the situation.

[–] Saigonauticon@voltage.vn 1 points 10 months ago

That sounds even better!

[–] Saigonauticon@voltage.vn 1 points 10 months ago (2 children)

Hm, that reminds me! If you're designing your own PCB, some manufacturers will make the PCB out of aluminum for you instead of FR4. This is commonly used for high-intensity LED lights to help keep them cool.

Here's some random info about them so you can see what I mean:

https://www.pcbgogo.com/Article/An_Introduction_to_Aluminum_PCBs_by_PCBGOGO.html

An alternative would be copper-clad polyimide adhered to the body. That also has better thermal properties than FR4.

[–] Saigonauticon@voltage.vn 1 points 10 months ago (4 children)

In seconds? Wow. I think you're right, you might need more than a small fan!

It might be worth exploring heat pipes or Peltier-effect coolers. The latter actually make the heat problem worse (they are inefficient and generate a lot of heat themselves), but your LED can be locally cooler if you can e.g. move all that extra heat into a big heatsink (condensation can also be problematic).

One cheap source of heat pipes for testing could be old graphics cards -- they often outperform simple copper heat sinks. Use thermal epoxy to stick your LED to it and see if the performance is acceptable. On the exotic end of things, you could also water/oil cool it, or (carefully) make your own thermal grease from industrial diamond powder for a small boost in thermal conductivity.

Even at 95% efficiency, it sounds like your boost converter has some heat to dump too!

[–] Saigonauticon@voltage.vn 1 points 10 months ago

Hah! I totally didn't notice that. Good catch.

[–] Saigonauticon@voltage.vn 2 points 10 months ago (10 children)

These are the smallest fans that I know of: https://www.mouser.com/new/sunon/sunon-mighty-mini-fan/

They go down to 9mm x 9mm x 3mm.

If this is to cool some component in the flashlight, have you considered a heatsink instead?

[–] Saigonauticon@voltage.vn 1 points 11 months ago* (last edited 11 months ago) (1 children)

Yup, seen that for sure.

Did you try turning the little adjustment knob in your probe to calibrate it? It sometimes needs a small screwdriver. Here's a reference:

https://www.elecrow.com/download/HowToCalibrate10xProbe.pdf

What I'm referring to is labelled 'Cap Trimmer'. The document also has some waveform images that match your problem.

I have a Siglent and it looked like this at the dealership, then they adjusted the probe a bit, and then it was 100% fine.

[–] Saigonauticon@voltage.vn 2 points 11 months ago

Glad to help :)

Besides the I/O and supporting hardware, the clock speed is wildly different between these three chips -- that's worth considering. By that metric, the ATMEGA-based designs are the slowest by far -- although somewhat faster than you'd estimate, since they usually execute one instruction per clock cycle, whereas the other chips average a few clock cycles per instruction (they are still way faster than the ATMEGA line, though).

Regarding pre-made boards vs. your own? I think there are three things to consider:

  1. Pre-made boards are awesome for prototyping. Making sure the damn thing will work (feature-complete) before designing your own board is a good idea. Then, make your first board with all features added in (this is important), but expect to iterate at least once (make revisions and order boards a second time). There's no such thing as premature optimization in hardware design -- it's not like software where you can just design the core of an application and then build features as you go. This is why always designing prototypes to be feature-complete is a good workflow, and generic development boards are a good starting point for this.

  2. Designing your own board is really easy for AVRs. I do this all the time, lately with the ATtiny10. Honestly there are a ton of AVR chips out there, and not all of them have affordable / popular development boards, so it's often worth making your own for use in item 1 above (really, at minimum you just need power and a header to break out the pins for ISP programming). Then when you want to make your final widget, you just expand your development board design, which lets you make a really miniaturized and streamlined thing! You will need an ISP programmer though, like the AVR-ICE (which has a nasty but minor bug in the design -- ping me before buying one and I'll save you 2 days of headaches setting up).

  3. A neat trick is to design your own boards and still use a dev board (so making your own boards and buying premade dev boards are not mutually exclusive options). This is especially useful with the Pi Pico and ESP32 (where making a dev board is less beginner-friendly) -- a cheatcode is "castellated mounting holes". These let you solder (for example) a Pi Pico dev board directly to your own design as a surface-mount component. You can do this by just adding a socket and using header pins too, but SMT + castellated mounting holes lets you keep the design small and reliable.

BTW when designing your own boards, committing to SMT parts (where possible) early on is one of the things I'm really glad I did. You don't need much tooling to do it. Just a solder paste syringe, a toothpick or pin, some tweezers, and a hot air rework station (included in some soldering stations). Even 0402 parts (about the size of two grains of salt) are pretty easy to do by hand. It's amazing the level of miniaturization that you can achieve these days this way, as a private individual with a very modest budget!

Finally, the Arduino products are generally very good dev boards, whether or not you're using the Arduino IDE (you can still program them in ASM or non-Arduino C++). So for any chip that an Arduino exists for, it's an excellent starting point -- although you may want to design your own board one day to remove unnecessary stuff if it comes out cheaper and you go through a lot of them, or just for the experience.

[–] Saigonauticon@voltage.vn 1 points 11 months ago (2 children)

There are a lot of differences, but I'll try and go over the high level ones. The RP2040 is a chip, and the others are boards -- so I'll compare the chips on them.

The RP2040 chip is really powerful overall, and does some odd things with I/O that let you do a bunch of very fast, precise things. You also get a lot of I/O pins and they are very well-behaved. The main advantage though is that it works well in both Python and C++, and is well-supported.

The ESP32-based board (Thing Plus) has integrated WiFi. The ESP32 is a great chip, I use it a lot, but it has some unfortunate quirks. First, although it has a very high clock speed and decent memory (making it quite powerful), if you glitch out the network stack with your code it can suffer unexpected resets. This was much worse with the earlier-generation ESP8266. Secondly, the I/O works much more slowly than the system clock (if I recall correctly), and the pins are picky about what state they have on startup -- some go high as part of the boot process, others must be high or low on boot but can be used afterward. This is actually quite a pain sometimes. It's a great chip overall though and works well in C++.

The Pro Micro uses an ATmega32U4 chip. I'm a huge AVR fan so I don't have many bad things to say; I like it a lot. It is much slower than the other two chips though, and has less memory. It's probably best to use C++, but you can use assembly too if you like. The I/O on AVRs is really well-behaved and usually operates at the same speed as the chip, which is nice when you need precise timing! The best thing about it, though, is that it can use much less power than the other two options if you use the sleep modes right. So you can build neat battery-powered applications. Finally, AVRs have excellent datasheets -- there's rarely any ambiguity about exactly how any system on the chip works.

Overall, I'd choose an RP2040 board if I wanted to use Python and do IoT/Robots/whatever (you can buy boards with or without WiFi), an ESP32 based board if I wanted to do IoT stuff in C++, and the Pro Micro if I wanted to do low-level, low power embedded stuff in C++ or assembly (and maybe branch out into other AVR chips). The C++ options mean you can use the Arduino IDE and their libraries.

 

Disclaimer: this is not specifically for a commercial product, but various things I design sometimes get commercialized. I mention this so that you may decide whether you want to weigh in. If it's commercialized, I will probably make very little money but a bunch of university students may get a neat STEM program in the countryside :D

That out of the way, I've designed some boards for a Wi-Fi controlled robot with mecanum wheels. So 4 independent motor drivers, one for each wheel, allow omnidirectional motion. It's built around a Pi Pico W, 4 SOIC-8 9110S motor drivers, and some buck/boost converters to give the system a 5V and a 12V line. It's very basic, mostly made to be cheap. Here's a photo:

Right now it just receives UDP communications (from a little app written in Godot) and activates the motors in different combinations -- very "hello world". I'm planning to add some autonomy to move around pre-generated maps, solve mazes, and so on.

I have foolishly used 2-pin JST connectors for the motors, so using motors with rotary encoders would be a pain without ordering new boards. I'll probably fix that in a later board revision or just hack it in. Also the routing is sloppy and there's no ground plane. It works well enough for development and testing though :D

What I'm thinking about right now is how to let the robot position itself in a room effectively and cheaply. I was thinking of either adding a full LiDAR or building a limited one out of a servo motor and two cheap laser ToF sensors -- e.g. one pointed forward, the other back, so I can sweep it 90 degrees. Since the LiDAR does not need to be fast or continuously sweeping, I am leaning toward the latter approach.

Then the processing is handled remotely: a server requests that the robot do a LiDAR sweep, the robot sends a minimal point cloud back, and the server estimates the robot's current location and sends back instructions to move in a direction for some distance. This is probably where the lack of rotary encoders is going to hurt, but for now I'm planning on just pointing the forward laser ToF sensor towards a target and giving the instruction "turn or move forward at static speed X until the sensor reads Y", which should be pretty easy for the MCU to handle.
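
In rough terms, the robot side of that instruction would look something like the sketch below (set_drive(), tof_front_mm(), and stop_motors() are hypothetical placeholders for the real 9110S driver and ToF sensor code, not actual library calls):

```cpp
// Handling a "move forward at static speed X until the front ToF sensor reads Y" command.
// The functions below are placeholder declarations standing in for the real motor-driver
// PWM code and the laser ToF read -- illustrative only.
void set_drive(int speed);         // drive all four wheels forward at a given PWM duty
int  tof_front_mm();               // read the forward time-of-flight sensor, in mm
void stop_motors();
void delay(unsigned long ms);      // Arduino-style delay, assumed available

struct MoveCommand {
  int speed;        // static drive speed X (e.g. PWM duty)
  int target_mm;    // stop when the forward ToF sensor reads this distance Y
};

void execute_move(const MoveCommand &cmd) {
  set_drive(cmd.speed);                      // all four wheels forward at speed X
  while (tof_front_mm() > cmd.target_mm) {   // poll the sensor until we reach Y
    delay(10);                               // a modest polling rate is plenty here
  }
  stop_motors();
}
```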

I'm planning to control multiple robots from the same server. The robots don't need to be super fast.

What I'm currently wondering is whether my approach really needs rotary encoders in practice -- I've heard that mecanum wheels have enough mechanical slippage that position estimates end up inaccurate, and designers often add another set of unpowered wheels for position tracking anyway. I don't want to add more wheels in this way though.

On the other hand, it would probably be easier to tell the MCU to "move forward X rotary encoder pulses at a velocity defined by Y pulses per second, and then check position and correct at a lower speed" than to use a pure LiDAR approach (e.g. even if rotary encoders don't give me accurate absolute position, on small time scales they give me good feedback for controlling speed). I could possibly even send a fairly complex series of instructions in one go, making the communications efficient enough to eliminate the local server and control a ton of robots from a cloud VPS or whatever.

Anyone have experience with encoders + mecanum wheels who can offer a few tips my way? At this stage the project doesn't have clear engineering goals and is mostly an academic exercise. I've read that using a rigid chassis and minimizing the need for lateral motion can reduce slippage, but reading through a few papers didn't get me any numerical indication of what to expect.

 

So I wanted to design a children's toy, where the electronics could last 100 years (ignoring mechanical abuse). I figured some people here might be interested.

I settled on a CR2032-powered night light, using an ATtiny10 microcontroller, whose flash is rated for 100 years as long as you're not writing to it (which I am not). I did some pretty heavy power optimization. The firmware is hand-optimized assembly.

When you turn it upside-down, a tilt switch toggles an LED @ 3mA via a pretty intense debouncing routine.

A watchdog timer powers it off automatically after 30 minutes.
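
The real firmware is hand-written assembly, but the logic is roughly equivalent to this C sketch (sleep_until_tilt_change(), tilt_is_closed(), led_on()/led_off(), start_auto_off_timer(), and the constants are illustrative placeholders, not the actual code):

```c
#include <stdint.h>
#include <stdbool.h>

#define DEBOUNCE_SAMPLES 50   // consecutive stable reads required (illustrative value)
#define AUTO_OFF_MINUTES 30   // watchdog-driven auto power-off

// Placeholder declarations standing in for the real ATtiny10 port and sleep handling.
void sleep_until_tilt_change(void);   // deep sleep (<1 uA) until the tilt switch moves
bool tilt_is_closed(void);
void led_on(void);
void led_off(void);
void start_auto_off_timer(uint8_t minutes);

int main(void) {
  bool led_state = false;
  for (;;) {
    sleep_until_tilt_change();

    // Heavy debounce: require many consecutive identical samples (spaced out in time
    // in the real routine) before treating the event as a genuine flip.
    uint8_t stable = 0;
    bool level = tilt_is_closed();
    while (stable < DEBOUNCE_SAMPLES) {
      if (tilt_is_closed() == level) {
        stable++;
      } else {
        level = tilt_is_closed();
        stable = 0;
      }
    }

    led_state = !led_state;             // a confirmed flip toggles the ~3 mA LED
    if (led_state) {
      led_on();
      start_auto_off_timer(AUTO_OFF_MINUTES);  // watchdog turns it off after 30 min
    } else {
      led_off();
    }
  }
}
```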

When off, it consumes less than 1 uA. So it has about 25 years of standby time, although the battery is only rated for 10 years (it is replaceable though).

If a child uses it every day, then the battery should last about 4.5 months.
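
For the curious, the rough math behind those numbers (assuming a nominal ~220 mAh CR2032 and one 30-minute session per day):

```
Standby:   220 mAh / 0.001 mA ≈ 220,000 h ≈ 25 years
Daily use: 3 mA x 0.5 h/day = 1.5 mAh/day -> 220 / 1.5 ≈ 147 days ≈ 4.5-5 months
```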

I made custom boards for it -- I kept it simple, with as few components as possible (resistor is for scale):

I kept assembly simple. A better design would snap right into the pins of the CR2032 holder, but that's an addition I'll make another day. I also should have added one more ground pad to solder to, but forgot. Still, an OK result I think.

I used some spray-on lacquer to protect the traces a bit after assembly.

 

What I've done is take a large 2N3055 (an NPN BJT power transistor) and decap it (it is a large metal-can type). Then I carefully removed any coating from the exposed silicon (it typically has a dab of silicone potting compound on it).

I had a weak alpha source at ~5 MeV lying around the lab from previous work. I inserted it into the can with the beam facing downward, towards the exposed silicon, then reattached the can and made it lightproof.

Then I threw together the circuit shown here using the modified transistor (the base is left floating). What I expected was that at TP1 (relative to GND), with my scope AC-coupled, I would see small voltage spikes followed by a decay, caused by alpha particles striking the silicon and knocking loose enough carriers to permit a brief pulse of current.

However, I just see... more or less nothing, maybe some electrical noise from fluorescent lamps in the room next door. Certainly not the spike+decay curve I've seen with other detectors.

Did I make a wrong assumption somewhere? It's been a while since I worked with discrete transistors much, and I feel like I am missing something silly.

Or is this more or less right, and I should maybe question whether my alpha source is still good? Or whether the signal strength is in a voltage domain I can even clearly see without amplification? Or maybe I should suspect that a thin passivating glass layer is added to big BJTs these days, enough to block the alpha?
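
For a sense of scale on the signal-strength question, a back-of-the-envelope estimate (assuming ~3.6 eV per electron-hole pair in silicon and guessing at the node capacitance):

```
Charge per 5 MeV alpha: 5e6 eV / 3.6 eV ≈ 1.4e6 pairs ≈ 1.4e6 x 1.6e-19 C ≈ 0.2 pC
Into ~20-100 pF of junction + wiring + probe capacitance: V = Q/C ≈ 2-11 mV peak
```

So before any gain from the transistor, the raw spikes may only be a few millivolts -- easy to lose in the noise without amplification.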

The source is past expiry, but not by that much. I'm mostly interested in characterizing and documenting the detector as an academic exercise.

 

I've wondered this occasionally over the years, but never got it working.

I tried just putting a dried piece of chicken bone pressed between two plates (mild compressive stress perpendicular to the bone) and driving it with an inverter, just like I would a crystal. It did not work. Maybe I need a really thin segment?

I have no practical application in mind. I might make a CPU from it for Halloween I guess?

I'm not sure if I would classify it as electronics or necromancy, but I thought it was an interesting question to ask here :)
